Test Report: KVM_Linux_crio 18610

67827f9862f5d1dcb60fbd876ba9804ed0a42712:2024-04-10:33972

Failed tests (28/321)

Order  Failed test  Duration (s)
21 TestDownloadOnly/v1.30.0-rc.1/json-events 29.21
39 TestAddons/parallel/Ingress 156.24
53 TestAddons/StoppedEnableDisable 154.3
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 364.31
179 TestMultiControlPlane/serial/StopCluster 142.04
239 TestMultiNode/serial/RestartKeepsNodes 306.43
241 TestMultiNode/serial/StopMultiNode 141.63
248 TestPreload 338.59
256 TestKubernetesUpgrade 345.35
290 TestPause/serial/SecondStartNoReconfiguration 52.79
293 TestStartStop/group/old-k8s-version/serial/FirstStart 271.29
298 TestStartStop/group/no-preload/serial/Stop 139.05
303 TestStartStop/group/embed-certs/serial/Stop 139.2
304 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 87.82
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
312 TestStartStop/group/old-k8s-version/serial/SecondStart 764.25
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.09
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
320 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.28
321 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.37
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.39
323 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.42
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 544.27
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 481.08
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 244.73
327 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 130.43
TestDownloadOnly/v1.30.0-rc.1/json-events (29.21s)
=== RUN   TestDownloadOnly/v1.30.0-rc.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-753930 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -o=json --download-only -p download-only-753930 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: exit status 40 (29.21266626s)
-- stdout --
	{"specversion":"1.0","id":"2121549f-72da-4ee3-b3d5-89097b6d497b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-753930] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c2bc477-3cbc-4b1a-8775-d2d2b46c8edf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18610"}}
	{"specversion":"1.0","id":"dd82d95e-036b-451b-90af-b1132cd10e07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0a24d8f8-3338-45a6-85db-5107974bd810","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig"}}
	{"specversion":"1.0","id":"33d72f64-4ce7-40cd-8288-611a4dbb84e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube"}}
	{"specversion":"1.0","id":"9940af82-1775-4530-9326-ebcf3966f4f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2475f955-bf71-43b0-b72c-fb8f562dc508","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"1c6828f3-908e-4a20-ae32-296550f2f7ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the kvm2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"39445728-0672-4f8e-8972-b422c0d807cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-753930\" primary control-plane node in \"download-only-753930\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ae8d574-57d6-4566-b754-2d3ebadf8a17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.30.0-rc.1 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1e7bd54-abbc-4104-83c6-dc6385f9f474","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl.sha256 Dst:/home/jenkins/minikube-integration/18610-5679/.minikube/cache/linux/amd64/v1.30.0-rc.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x47f2020 0x47f2020 0x47f2020 0x47f2020 0x47f2020 0x47f2020 0x47f2020] Decompressors:map[bz2:0xc0006c1ab0 gz:0xc0006c1ab8 tar:0xc0006c19c0 tar.bz2:0xc0006c1a10 tar.gz:0xc0006c1a20 tar.xz:0xc0006c1a30 tar.zst:0xc0006c1a50 tbz2:0xc0006c1a10 tgz:0xc0006
c1a20 txz:0xc0006c1a30 tzst:0xc0006c1a50 xz:0xc0006c1ac0 zip:0xc0006c1ad0 zst:0xc0006c1ac8] Getters:map[file:0xc0026ae690 http:0xc0007b21e0 https:0xc0007b2230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60796-\u003e151.101.193.55:443: read: connection reset by peer","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"519ac1e7-589e-48c1-89ac-3a30079ac9c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
** stderr ** 
	I0410 21:28:59.105873   13537 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:28:59.106128   13537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:28:59.106139   13537 out.go:304] Setting ErrFile to fd 2...
	I0410 21:28:59.106143   13537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:28:59.106326   13537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:28:59.106903   13537 out.go:298] Setting JSON to true
	I0410 21:28:59.107657   13537 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":681,"bootTime":1712783858,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 21:28:59.107721   13537 start.go:139] virtualization: kvm guest
	I0410 21:28:59.109930   13537 out.go:97] [download-only-753930] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 21:28:59.111740   13537 out.go:169] MINIKUBE_LOCATION=18610
	I0410 21:28:59.110122   13537 notify.go:220] Checking for updates...
	I0410 21:28:59.114782   13537 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 21:28:59.116422   13537 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 21:28:59.117941   13537 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:28:59.119425   13537 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0410 21:28:59.122102   13537 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0410 21:28:59.122345   13537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 21:28:59.156800   13537 out.go:97] Using the kvm2 driver based on user configuration
	I0410 21:28:59.156823   13537 start.go:297] selected driver: kvm2
	I0410 21:28:59.156830   13537 start.go:901] validating driver "kvm2" against <nil>
	I0410 21:28:59.157191   13537 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:28:59.157269   13537 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 21:28:59.171808   13537 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 21:28:59.171864   13537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0410 21:28:59.172351   13537 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0410 21:28:59.172567   13537 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0410 21:28:59.172626   13537 cni.go:84] Creating CNI manager for ""
	I0410 21:28:59.172643   13537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 21:28:59.172651   13537 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0410 21:28:59.172704   13537 start.go:340] cluster config:
	{Name:download-only-753930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:download-only-753930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:28:59.172791   13537 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:28:59.174625   13537 out.go:97] Starting "download-only-753930" primary control-plane node in "download-only-753930" cluster
	I0410 21:28:59.174636   13537 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 21:28:59.264520   13537 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.1/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0410 21:28:59.264551   13537 cache.go:56] Caching tarball of preloaded images
	I0410 21:28:59.264719   13537 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 21:28:59.266729   13537 out.go:97] Downloading Kubernetes v1.30.0-rc.1 preload ...
	I0410 21:28:59.266743   13537 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0410 21:28:59.361788   13537 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.1/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:87f68ecc43ec0a2c6db951923ee9e281 -> /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0410 21:29:10.164393   13537 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0410 21:29:10.164507   13537 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0410 21:29:10.924223   13537 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.1 on crio
	I0410 21:29:10.924561   13537 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/download-only-753930/config.json ...
	I0410 21:29:10.924590   13537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/download-only-753930/config.json: {Name:mkc35d51ea85e287a7e67ae59e8773c90c3218b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:29:10.924738   13537 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 21:29:10.924903   13537 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18610-5679/.minikube/cache/linux/amd64/v1.30.0-rc.1/kubectl
	I0410 21:29:28.247329   13537 out.go:169] 
	W0410 21:29:28.248860   13537 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl.sha256 Dst:/home/jenkins/minikube-integration/18610-5679/.minikube/cache/linux/amd64/v1.30.0-rc.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x47f2020 0x47f2020 0x47f2020 0x47f2020 0x47f2020 0x47f2020 0x47f2020] Decompressors:map[bz2:0xc0006c1ab0 gz:0xc0006c1ab8 tar:0xc0006c19c0 tar.bz2:0xc0006c1a10 tar.gz:0xc0006c1a20 tar.xz:0xc0006c1a30 tar.zst:0xc0006c1a50 tbz2:0xc0006c1a10 tgz:0xc0006c1a20 txz:0xc0006c1a30 tzst:0xc0006c1a50 xz:0xc0006c1ac0 zip:0xc0006c1ad0 zst:0xc0006c1ac8] Getters:map[file:0xc0026ae690 http:0xc0007b21e0 https:0xc0007b2230] Dir:false ProgressListener:<nil> I
nsecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60796->151.101.193.55:443: read: connection reset by peer
	W0410 21:29:28.248871   13537 out_reason.go:110] 
	W0410 21:29:28.251330   13537 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 21:29:28.252641   13537 out.go:169] 
** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-753930" "--force" "--alsologtostderr" "--kubernetes-version=v1.30.0-rc.1" "--container-runtime=crio" "--driver=kvm2" "" "--container-runtime=crio"] exit status 40
--- FAIL: TestDownloadOnly/v1.30.0-rc.1/json-events (29.21s)
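
The failure above is a transient network error: the kubectl download from dl.k8s.io was cut off with "connection reset by peer" while minikube was caching the binary, so the command exited with status 40. The preload tarball download in the same run succeeded, which suggests a one-off network hiccup rather than a persistent outage. As a rough illustration only (this is not minikube's downloader; the retry count, backoff, and destination path are assumptions), a small Go program retrying such a download could look like:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// downloadWithRetry fetches url into dst, retrying on transient errors such as
// the "connection reset by peer" seen in the log above.
func downloadWithRetry(url, dst string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		if i > 0 {
			time.Sleep(time.Duration(i) * 2 * time.Second) // simple linear backoff
		}
		resp, err := http.Get(url)
		if err != nil {
			lastErr = err
			continue
		}
		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			lastErr = fmt.Errorf("unexpected status %s", resp.Status)
			continue
		}
		out, err := os.Create(dst)
		if err != nil {
			resp.Body.Close()
			return err
		}
		_, copyErr := io.Copy(out, resp.Body)
		resp.Body.Close()
		out.Close()
		if copyErr == nil {
			return nil
		}
		lastErr = copyErr
	}
	return fmt.Errorf("download failed after %d attempts: %w", attempts, lastErr)
}

func main() {
	// URL taken from the failing log line; the destination path is illustrative only.
	url := "https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl"
	if err := downloadWithRetry(url, "/tmp/kubectl.download", 3); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(40) // the exit code reported by the failing test
	}
}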
TestAddons/parallel/Ingress (156.24s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-577364 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-577364 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-577364 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d06609ad-b7cc-4be0-8572-437ef14d80dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d06609ad-b7cc-4be0-8572-437ef14d80dd] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003910215s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-577364 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.216001111s)
** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-577364 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.209
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-577364 addons disable ingress-dns --alsologtostderr -v=1: (2.045747037s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-577364 addons disable ingress --alsologtostderr -v=1: (7.878792427s)
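
The failing step in this test is the probe at addons_test.go:262: `minikube ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"` exited with status 28 (curl's timeout code), meaning the nginx Ingress never answered within the curl deadline. The later ingress-dns lookup and addon-disable steps completed, so only this HTTP probe failed. A minimal Go sketch of an equivalent probe with an explicit deadline (assumption: it runs from inside the minikube VM, as the test's ssh step does; the 30-second timeout is arbitrary):

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// A 30s timeout makes a hang fail fast instead of blocking like the ssh/curl step did.
	client := &http.Client{Timeout: 30 * time.Second}

	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Setting req.Host mirrors curl's -H 'Host: nginx.example.com' and selects the
	// Ingress rule for that virtual host.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, "ingress probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("ingress responded with", resp.Status)
}
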
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-577364 -n addons-577364
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-577364 logs -n 25: (1.367125265s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-753930                                                                     | download-only-753930 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:29 UTC | 10 Apr 24 21:29 UTC |
	| delete  | -p download-only-543401                                                                     | download-only-543401 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:29 UTC | 10 Apr 24 21:29 UTC |
	| delete  | -p download-only-765356                                                                     | download-only-765356 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:29 UTC | 10 Apr 24 21:29 UTC |
	| delete  | -p download-only-753930                                                                     | download-only-753930 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:29 UTC | 10 Apr 24 21:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-689773 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:29 UTC |                     |
	|         | binary-mirror-689773                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:43159                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-689773                                                                     | binary-mirror-689773 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:29 UTC | 10 Apr 24 21:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:29 UTC |                     |
	|         | addons-577364                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:29 UTC |                     |
	|         | addons-577364                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-577364 --wait=true                                                                | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:29 UTC | 10 Apr 24 21:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:32 UTC | 10 Apr 24 21:32 UTC |
	|         | addons-577364                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-577364 ssh cat                                                                       | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:32 UTC | 10 Apr 24 21:32 UTC |
	|         | /opt/local-path-provisioner/pvc-d17dbfe7-521e-40fb-b0d3-e5165151a7dc_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-577364 addons disable                                                                | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:32 UTC | 10 Apr 24 21:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-577364 addons disable                                                                | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:32 UTC | 10 Apr 24 21:32 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-577364 addons                                                                        | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:32 UTC | 10 Apr 24 21:32 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-577364 ip                                                                            | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:32 UTC | 10 Apr 24 21:32 UTC |
	| addons  | disable inspektor-gadget -p                                                                 | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:32 UTC | 10 Apr 24 21:32 UTC |
	|         | addons-577364                                                                               |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:32 UTC | 10 Apr 24 21:32 UTC |
	|         | -p addons-577364                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ssh     | addons-577364 ssh curl -s                                                                   | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:32 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:32 UTC | 10 Apr 24 21:32 UTC |
	|         | -p addons-577364                                                                            |                      |         |                |                     |                     |
	| addons  | addons-577364 addons disable                                                                | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:33 UTC | 10 Apr 24 21:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-577364 addons                                                                        | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:33 UTC | 10 Apr 24 21:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-577364 addons                                                                        | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:33 UTC | 10 Apr 24 21:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-577364 ip                                                                            | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:34 UTC | 10 Apr 24 21:34 UTC |
	| addons  | addons-577364 addons disable                                                                | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:34 UTC | 10 Apr 24 21:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-577364 addons disable                                                                | addons-577364        | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:34 UTC | 10 Apr 24 21:34 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 21:29:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 21:29:29.705801   14058 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:29:29.706088   14058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:29:29.706098   14058 out.go:304] Setting ErrFile to fd 2...
	I0410 21:29:29.706102   14058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:29:29.706321   14058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:29:29.707504   14058 out.go:298] Setting JSON to false
	I0410 21:29:29.708623   14058 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":712,"bootTime":1712783858,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 21:29:29.708691   14058 start.go:139] virtualization: kvm guest
	I0410 21:29:29.710803   14058 out.go:177] * [addons-577364] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 21:29:29.712813   14058 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 21:29:29.712774   14058 notify.go:220] Checking for updates...
	I0410 21:29:29.714514   14058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 21:29:29.716108   14058 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 21:29:29.717572   14058 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:29:29.719177   14058 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 21:29:29.720701   14058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 21:29:29.722468   14058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 21:29:29.754722   14058 out.go:177] * Using the kvm2 driver based on user configuration
	I0410 21:29:29.755928   14058 start.go:297] selected driver: kvm2
	I0410 21:29:29.755943   14058 start.go:901] validating driver "kvm2" against <nil>
	I0410 21:29:29.755953   14058 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 21:29:29.756702   14058 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:29:29.756765   14058 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 21:29:29.771411   14058 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 21:29:29.771474   14058 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0410 21:29:29.771677   14058 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 21:29:29.771740   14058 cni.go:84] Creating CNI manager for ""
	I0410 21:29:29.771753   14058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 21:29:29.771759   14058 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0410 21:29:29.771815   14058 start.go:340] cluster config:
	{Name:addons-577364 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-577364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:29:29.771900   14058 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:29:29.773710   14058 out.go:177] * Starting "addons-577364" primary control-plane node in "addons-577364" cluster
	I0410 21:29:29.775099   14058 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 21:29:29.775125   14058 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 21:29:29.775131   14058 cache.go:56] Caching tarball of preloaded images
	I0410 21:29:29.775217   14058 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 21:29:29.775229   14058 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 21:29:29.775505   14058 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/config.json ...
	I0410 21:29:29.775524   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/config.json: {Name:mkd3d087b1751734603d5de362ba9a4e36f29758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:29:29.775647   14058 start.go:360] acquireMachinesLock for addons-577364: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 21:29:29.775690   14058 start.go:364] duration metric: took 29.72µs to acquireMachinesLock for "addons-577364"
	I0410 21:29:29.775708   14058 start.go:93] Provisioning new machine with config: &{Name:addons-577364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:addons-577364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 21:29:29.775760   14058 start.go:125] createHost starting for "" (driver="kvm2")
	I0410 21:29:29.777236   14058 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0410 21:29:29.777360   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:29:29.777394   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:29:29.791339   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I0410 21:29:29.791705   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:29:29.792252   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:29:29.792270   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:29:29.792671   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:29:29.792902   14058 main.go:141] libmachine: (addons-577364) Calling .GetMachineName
	I0410 21:29:29.793089   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:29:29.793227   14058 start.go:159] libmachine.API.Create for "addons-577364" (driver="kvm2")
	I0410 21:29:29.793257   14058 client.go:168] LocalClient.Create starting
	I0410 21:29:29.793308   14058 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem
	I0410 21:29:29.933119   14058 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem
	I0410 21:29:30.075452   14058 main.go:141] libmachine: Running pre-create checks...
	I0410 21:29:30.075476   14058 main.go:141] libmachine: (addons-577364) Calling .PreCreateCheck
	I0410 21:29:30.076013   14058 main.go:141] libmachine: (addons-577364) Calling .GetConfigRaw
	I0410 21:29:30.076420   14058 main.go:141] libmachine: Creating machine...
	I0410 21:29:30.076433   14058 main.go:141] libmachine: (addons-577364) Calling .Create
	I0410 21:29:30.076611   14058 main.go:141] libmachine: (addons-577364) Creating KVM machine...
	I0410 21:29:30.077877   14058 main.go:141] libmachine: (addons-577364) DBG | found existing default KVM network
	I0410 21:29:30.078600   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:30.078448   14080 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0410 21:29:30.078627   14058 main.go:141] libmachine: (addons-577364) DBG | created network xml: 
	I0410 21:29:30.078657   14058 main.go:141] libmachine: (addons-577364) DBG | <network>
	I0410 21:29:30.078674   14058 main.go:141] libmachine: (addons-577364) DBG |   <name>mk-addons-577364</name>
	I0410 21:29:30.078684   14058 main.go:141] libmachine: (addons-577364) DBG |   <dns enable='no'/>
	I0410 21:29:30.078689   14058 main.go:141] libmachine: (addons-577364) DBG |   
	I0410 21:29:30.078696   14058 main.go:141] libmachine: (addons-577364) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0410 21:29:30.078701   14058 main.go:141] libmachine: (addons-577364) DBG |     <dhcp>
	I0410 21:29:30.078708   14058 main.go:141] libmachine: (addons-577364) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0410 21:29:30.078713   14058 main.go:141] libmachine: (addons-577364) DBG |     </dhcp>
	I0410 21:29:30.078721   14058 main.go:141] libmachine: (addons-577364) DBG |   </ip>
	I0410 21:29:30.078732   14058 main.go:141] libmachine: (addons-577364) DBG |   
	I0410 21:29:30.078741   14058 main.go:141] libmachine: (addons-577364) DBG | </network>
	I0410 21:29:30.078753   14058 main.go:141] libmachine: (addons-577364) DBG | 
	I0410 21:29:30.083907   14058 main.go:141] libmachine: (addons-577364) DBG | trying to create private KVM network mk-addons-577364 192.168.39.0/24...
	I0410 21:29:30.146644   14058 main.go:141] libmachine: (addons-577364) DBG | private KVM network mk-addons-577364 192.168.39.0/24 created
	I0410 21:29:30.146674   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:30.146572   14080 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:29:30.146682   14058 main.go:141] libmachine: (addons-577364) Setting up store path in /home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364 ...
	I0410 21:29:30.146703   14058 main.go:141] libmachine: (addons-577364) Building disk image from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso
	I0410 21:29:30.146714   14058 main.go:141] libmachine: (addons-577364) Downloading /home/jenkins/minikube-integration/18610-5679/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso...
	I0410 21:29:30.386712   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:30.386571   14080 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa...
	I0410 21:29:30.619280   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:30.619132   14080 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/addons-577364.rawdisk...
	I0410 21:29:30.619309   14058 main.go:141] libmachine: (addons-577364) DBG | Writing magic tar header
	I0410 21:29:30.619319   14058 main.go:141] libmachine: (addons-577364) DBG | Writing SSH key tar header
	I0410 21:29:30.619327   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:30.619238   14080 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364 ...
	I0410 21:29:30.619344   14058 main.go:141] libmachine: (addons-577364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364
	I0410 21:29:30.619436   14058 main.go:141] libmachine: (addons-577364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines
	I0410 21:29:30.619456   14058 main.go:141] libmachine: (addons-577364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:29:30.619465   14058 main.go:141] libmachine: (addons-577364) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364 (perms=drwx------)
	I0410 21:29:30.619475   14058 main.go:141] libmachine: (addons-577364) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines (perms=drwxr-xr-x)
	I0410 21:29:30.619481   14058 main.go:141] libmachine: (addons-577364) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube (perms=drwxr-xr-x)
	I0410 21:29:30.619492   14058 main.go:141] libmachine: (addons-577364) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679 (perms=drwxrwxr-x)
	I0410 21:29:30.619510   14058 main.go:141] libmachine: (addons-577364) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0410 21:29:30.619520   14058 main.go:141] libmachine: (addons-577364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679
	I0410 21:29:30.619532   14058 main.go:141] libmachine: (addons-577364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0410 21:29:30.619540   14058 main.go:141] libmachine: (addons-577364) DBG | Checking permissions on dir: /home/jenkins
	I0410 21:29:30.619549   14058 main.go:141] libmachine: (addons-577364) DBG | Checking permissions on dir: /home
	I0410 21:29:30.619557   14058 main.go:141] libmachine: (addons-577364) DBG | Skipping /home - not owner
	I0410 21:29:30.619567   14058 main.go:141] libmachine: (addons-577364) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0410 21:29:30.619573   14058 main.go:141] libmachine: (addons-577364) Creating domain...
	I0410 21:29:30.620751   14058 main.go:141] libmachine: (addons-577364) define libvirt domain using xml: 
	I0410 21:29:30.620778   14058 main.go:141] libmachine: (addons-577364) <domain type='kvm'>
	I0410 21:29:30.620804   14058 main.go:141] libmachine: (addons-577364)   <name>addons-577364</name>
	I0410 21:29:30.620833   14058 main.go:141] libmachine: (addons-577364)   <memory unit='MiB'>4000</memory>
	I0410 21:29:30.620847   14058 main.go:141] libmachine: (addons-577364)   <vcpu>2</vcpu>
	I0410 21:29:30.620857   14058 main.go:141] libmachine: (addons-577364)   <features>
	I0410 21:29:30.620863   14058 main.go:141] libmachine: (addons-577364)     <acpi/>
	I0410 21:29:30.620870   14058 main.go:141] libmachine: (addons-577364)     <apic/>
	I0410 21:29:30.620876   14058 main.go:141] libmachine: (addons-577364)     <pae/>
	I0410 21:29:30.620886   14058 main.go:141] libmachine: (addons-577364)     
	I0410 21:29:30.620899   14058 main.go:141] libmachine: (addons-577364)   </features>
	I0410 21:29:30.620910   14058 main.go:141] libmachine: (addons-577364)   <cpu mode='host-passthrough'>
	I0410 21:29:30.620932   14058 main.go:141] libmachine: (addons-577364)   
	I0410 21:29:30.620957   14058 main.go:141] libmachine: (addons-577364)   </cpu>
	I0410 21:29:30.620967   14058 main.go:141] libmachine: (addons-577364)   <os>
	I0410 21:29:30.620978   14058 main.go:141] libmachine: (addons-577364)     <type>hvm</type>
	I0410 21:29:30.620991   14058 main.go:141] libmachine: (addons-577364)     <boot dev='cdrom'/>
	I0410 21:29:30.621002   14058 main.go:141] libmachine: (addons-577364)     <boot dev='hd'/>
	I0410 21:29:30.621020   14058 main.go:141] libmachine: (addons-577364)     <bootmenu enable='no'/>
	I0410 21:29:30.621031   14058 main.go:141] libmachine: (addons-577364)   </os>
	I0410 21:29:30.621049   14058 main.go:141] libmachine: (addons-577364)   <devices>
	I0410 21:29:30.621070   14058 main.go:141] libmachine: (addons-577364)     <disk type='file' device='cdrom'>
	I0410 21:29:30.621092   14058 main.go:141] libmachine: (addons-577364)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/boot2docker.iso'/>
	I0410 21:29:30.621101   14058 main.go:141] libmachine: (addons-577364)       <target dev='hdc' bus='scsi'/>
	I0410 21:29:30.621111   14058 main.go:141] libmachine: (addons-577364)       <readonly/>
	I0410 21:29:30.621119   14058 main.go:141] libmachine: (addons-577364)     </disk>
	I0410 21:29:30.621135   14058 main.go:141] libmachine: (addons-577364)     <disk type='file' device='disk'>
	I0410 21:29:30.621149   14058 main.go:141] libmachine: (addons-577364)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0410 21:29:30.621161   14058 main.go:141] libmachine: (addons-577364)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/addons-577364.rawdisk'/>
	I0410 21:29:30.621170   14058 main.go:141] libmachine: (addons-577364)       <target dev='hda' bus='virtio'/>
	I0410 21:29:30.621175   14058 main.go:141] libmachine: (addons-577364)     </disk>
	I0410 21:29:30.621184   14058 main.go:141] libmachine: (addons-577364)     <interface type='network'>
	I0410 21:29:30.621193   14058 main.go:141] libmachine: (addons-577364)       <source network='mk-addons-577364'/>
	I0410 21:29:30.621202   14058 main.go:141] libmachine: (addons-577364)       <model type='virtio'/>
	I0410 21:29:30.621210   14058 main.go:141] libmachine: (addons-577364)     </interface>
	I0410 21:29:30.621222   14058 main.go:141] libmachine: (addons-577364)     <interface type='network'>
	I0410 21:29:30.621238   14058 main.go:141] libmachine: (addons-577364)       <source network='default'/>
	I0410 21:29:30.621249   14058 main.go:141] libmachine: (addons-577364)       <model type='virtio'/>
	I0410 21:29:30.621260   14058 main.go:141] libmachine: (addons-577364)     </interface>
	I0410 21:29:30.621272   14058 main.go:141] libmachine: (addons-577364)     <serial type='pty'>
	I0410 21:29:30.621285   14058 main.go:141] libmachine: (addons-577364)       <target port='0'/>
	I0410 21:29:30.621295   14058 main.go:141] libmachine: (addons-577364)     </serial>
	I0410 21:29:30.621308   14058 main.go:141] libmachine: (addons-577364)     <console type='pty'>
	I0410 21:29:30.621320   14058 main.go:141] libmachine: (addons-577364)       <target type='serial' port='0'/>
	I0410 21:29:30.621330   14058 main.go:141] libmachine: (addons-577364)     </console>
	I0410 21:29:30.621345   14058 main.go:141] libmachine: (addons-577364)     <rng model='virtio'>
	I0410 21:29:30.621360   14058 main.go:141] libmachine: (addons-577364)       <backend model='random'>/dev/random</backend>
	I0410 21:29:30.621370   14058 main.go:141] libmachine: (addons-577364)     </rng>
	I0410 21:29:30.621374   14058 main.go:141] libmachine: (addons-577364)     
	I0410 21:29:30.621386   14058 main.go:141] libmachine: (addons-577364)     
	I0410 21:29:30.621394   14058 main.go:141] libmachine: (addons-577364)   </devices>
	I0410 21:29:30.621399   14058 main.go:141] libmachine: (addons-577364) </domain>
	I0410 21:29:30.621407   14058 main.go:141] libmachine: (addons-577364) 
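The XML dump above is the libvirt domain definition the kvm2 driver submits for the node VM: CD-ROM boot from boot2docker.iso, a raw virtio disk, two virtio NICs (the mk-addons-577364 network plus default), a serial console, and a virtio RNG. As a rough illustration only, defining and booting a domain like this through the libvirt Go bindings could look like the sketch below; the import path, connection URI, and the abbreviated XML are assumptions, not minikube's actual driver code.

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
    )

    func main() {
        // Connect to the system hypervisor; this run uses KVMQemuURI:qemu:///system.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // Stand-in for the full <domain type='kvm'> document logged above.
        domainXML := `<domain type='kvm'>
      <name>example</name>
      <memory unit='MiB'>1024</memory>
      <vcpu>1</vcpu>
      <os><type>hvm</type></os>
    </domain>`

        // Define the persistent domain, then boot it (the driver's "Creating domain..." step).
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatalf("define: %v", err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            log.Fatalf("start: %v", err)
        }
    }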
	I0410 21:29:30.627683   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:1f:68:cb in network default
	I0410 21:29:30.628236   14058 main.go:141] libmachine: (addons-577364) Ensuring networks are active...
	I0410 21:29:30.628248   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:30.628964   14058 main.go:141] libmachine: (addons-577364) Ensuring network default is active
	I0410 21:29:30.629242   14058 main.go:141] libmachine: (addons-577364) Ensuring network mk-addons-577364 is active
	I0410 21:29:30.630605   14058 main.go:141] libmachine: (addons-577364) Getting domain xml...
	I0410 21:29:30.631222   14058 main.go:141] libmachine: (addons-577364) Creating domain...
	I0410 21:29:32.010693   14058 main.go:141] libmachine: (addons-577364) Waiting to get IP...
	I0410 21:29:32.011469   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:32.011874   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:32.011890   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:32.011842   14080 retry.go:31] will retry after 274.789758ms: waiting for machine to come up
	I0410 21:29:32.288318   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:32.288792   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:32.288818   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:32.288774   14080 retry.go:31] will retry after 387.20544ms: waiting for machine to come up
	I0410 21:29:32.677015   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:32.677393   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:32.677425   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:32.677339   14080 retry.go:31] will retry after 299.263142ms: waiting for machine to come up
	I0410 21:29:32.977698   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:32.978069   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:32.978090   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:32.978023   14080 retry.go:31] will retry after 515.746678ms: waiting for machine to come up
	I0410 21:29:33.495652   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:33.496092   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:33.496121   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:33.496055   14080 retry.go:31] will retry after 586.966944ms: waiting for machine to come up
	I0410 21:29:34.085041   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:34.085530   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:34.085553   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:34.085500   14080 retry.go:31] will retry after 870.305437ms: waiting for machine to come up
	I0410 21:29:34.957134   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:34.957551   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:34.957574   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:34.957495   14080 retry.go:31] will retry after 1.063646141s: waiting for machine to come up
	I0410 21:29:36.022751   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:36.023235   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:36.023295   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:36.023184   14080 retry.go:31] will retry after 1.058572577s: waiting for machine to come up
	I0410 21:29:37.083309   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:37.083766   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:37.083791   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:37.083713   14080 retry.go:31] will retry after 1.566962977s: waiting for machine to come up
	I0410 21:29:38.652487   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:38.652885   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:38.652913   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:38.652835   14080 retry.go:31] will retry after 1.964026678s: waiting for machine to come up
	I0410 21:29:40.618037   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:40.618501   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:40.618531   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:40.618468   14080 retry.go:31] will retry after 1.826685587s: waiting for machine to come up
	I0410 21:29:42.447475   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:42.447943   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:42.447973   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:42.447894   14080 retry.go:31] will retry after 2.319980376s: waiting for machine to come up
	I0410 21:29:44.768995   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:44.769410   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:44.769433   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:44.769374   14080 retry.go:31] will retry after 3.609117616s: waiting for machine to come up
	I0410 21:29:48.383099   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:48.383549   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find current IP address of domain addons-577364 in network mk-addons-577364
	I0410 21:29:48.383574   14058 main.go:141] libmachine: (addons-577364) DBG | I0410 21:29:48.383498   14080 retry.go:31] will retry after 4.902595002s: waiting for machine to come up
	I0410 21:29:53.290426   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.290890   14058 main.go:141] libmachine: (addons-577364) Found IP for machine: 192.168.39.209
	I0410 21:29:53.290915   14058 main.go:141] libmachine: (addons-577364) Reserving static IP address...
	I0410 21:29:53.290931   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has current primary IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.291287   14058 main.go:141] libmachine: (addons-577364) DBG | unable to find host DHCP lease matching {name: "addons-577364", mac: "52:54:00:b9:30:7c", ip: "192.168.39.209"} in network mk-addons-577364
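The run of "will retry after …: waiting for machine to come up" lines above is a poll-with-growing-delay loop around the DHCP lease lookup, which finally resolves to 192.168.39.209. A minimal, self-contained sketch of that pattern follows; the function names and the fake probe are illustrative, not minikube's retry package.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out, roughly
    // doubling the wait between tries, like the delays logged above.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        for i := 0; i < attempts; i++ {
            err := fn()
            if err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return errors.New("timed out waiting for machine to come up")
    }

    func main() {
        start := time.Now()
        err := retryWithBackoff(10, 300*time.Millisecond, func() error {
            // Stand-in for "look up the domain's IP address in network mk-addons-577364".
            if time.Since(start) < 2*time.Second {
                return errors.New("unable to find current IP address")
            }
            return nil
        })
        fmt.Println("done:", err)
    }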
	I0410 21:29:53.362728   14058 main.go:141] libmachine: (addons-577364) DBG | Getting to WaitForSSH function...
	I0410 21:29:53.362755   14058 main.go:141] libmachine: (addons-577364) Reserved static IP address: 192.168.39.209
	I0410 21:29:53.362765   14058 main.go:141] libmachine: (addons-577364) Waiting for SSH to be available...
	I0410 21:29:53.365400   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.365840   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:53.365874   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.366089   14058 main.go:141] libmachine: (addons-577364) DBG | Using SSH client type: external
	I0410 21:29:53.366114   14058 main.go:141] libmachine: (addons-577364) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa (-rw-------)
	I0410 21:29:53.366149   14058 main.go:141] libmachine: (addons-577364) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 21:29:53.366168   14058 main.go:141] libmachine: (addons-577364) DBG | About to run SSH command:
	I0410 21:29:53.366184   14058 main.go:141] libmachine: (addons-577364) DBG | exit 0
	I0410 21:29:53.500523   14058 main.go:141] libmachine: (addons-577364) DBG | SSH cmd err, output: <nil>: 
	I0410 21:29:53.500835   14058 main.go:141] libmachine: (addons-577364) KVM machine creation complete!
	I0410 21:29:53.501144   14058 main.go:141] libmachine: (addons-577364) Calling .GetConfigRaw
	I0410 21:29:53.501667   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:29:53.501875   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:29:53.502015   14058 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0410 21:29:53.502026   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:29:53.503394   14058 main.go:141] libmachine: Detecting operating system of created instance...
	I0410 21:29:53.503412   14058 main.go:141] libmachine: Waiting for SSH to be available...
	I0410 21:29:53.503420   14058 main.go:141] libmachine: Getting to WaitForSSH function...
	I0410 21:29:53.503429   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:53.505982   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.506486   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:53.506511   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.506674   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:29:53.506838   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:53.506971   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:53.507090   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:29:53.507198   14058 main.go:141] libmachine: Using SSH client type: native
	I0410 21:29:53.507376   14058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0410 21:29:53.507386   14058 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0410 21:29:53.619898   14058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
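Above, provisioning first probes the guest with an external ssh invocation and then switches to a native Go SSH client to run `exit 0`, `cat /etc/os-release`, and the later provisioning commands. A bare-bones version of that native path using golang.org/x/crypto/ssh is sketched below; the host, user, and key path are taken from the log, error handling is trimmed, and this is not minikube's own SSH runner.

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote dials host:22 as user with a private key and runs one command,
    // mirroring the "About to run SSH command: exit 0" probe logged above.
    func runRemote(host, user, keyPath, cmd string) ([]byte, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
        }
        client, err := ssh.Dial("tcp", host+":22", cfg)
        if err != nil {
            return nil, err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return nil, err
        }
        defer session.Close()
        return session.CombinedOutput(cmd)
    }

    func main() {
        out, err := runRemote("192.168.39.209", "docker", "/path/to/id_rsa", "exit 0")
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("output: %q", out)
    }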
	I0410 21:29:53.619925   14058 main.go:141] libmachine: Detecting the provisioner...
	I0410 21:29:53.619935   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:53.622707   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.623038   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:53.623067   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.623189   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:29:53.623388   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:53.623545   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:53.623688   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:29:53.623949   14058 main.go:141] libmachine: Using SSH client type: native
	I0410 21:29:53.624169   14058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0410 21:29:53.624186   14058 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0410 21:29:53.737422   14058 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0410 21:29:53.737492   14058 main.go:141] libmachine: found compatible host: buildroot
	I0410 21:29:53.737499   14058 main.go:141] libmachine: Provisioning with buildroot...
	I0410 21:29:53.737506   14058 main.go:141] libmachine: (addons-577364) Calling .GetMachineName
	I0410 21:29:53.737742   14058 buildroot.go:166] provisioning hostname "addons-577364"
	I0410 21:29:53.737767   14058 main.go:141] libmachine: (addons-577364) Calling .GetMachineName
	I0410 21:29:53.737988   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:53.740564   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.740987   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:53.741013   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.741179   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:29:53.741354   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:53.741489   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:53.741586   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:29:53.741732   14058 main.go:141] libmachine: Using SSH client type: native
	I0410 21:29:53.741899   14058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0410 21:29:53.741912   14058 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-577364 && echo "addons-577364" | sudo tee /etc/hostname
	I0410 21:29:53.867525   14058 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-577364
	
	I0410 21:29:53.867554   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:53.870072   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.870369   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:53.870403   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.870598   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:29:53.870812   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:53.870982   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:53.871134   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:29:53.871306   14058 main.go:141] libmachine: Using SSH client type: native
	I0410 21:29:53.871473   14058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0410 21:29:53.871489   14058 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-577364' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-577364/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-577364' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 21:29:53.989716   14058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 21:29:53.989742   14058 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 21:29:53.989779   14058 buildroot.go:174] setting up certificates
	I0410 21:29:53.989791   14058 provision.go:84] configureAuth start
	I0410 21:29:53.989804   14058 main.go:141] libmachine: (addons-577364) Calling .GetMachineName
	I0410 21:29:53.990074   14058 main.go:141] libmachine: (addons-577364) Calling .GetIP
	I0410 21:29:53.992332   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.992676   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:53.992709   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.992805   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:53.994979   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.995310   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:53.995327   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:53.995485   14058 provision.go:143] copyHostCerts
	I0410 21:29:53.995550   14058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 21:29:53.995694   14058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 21:29:53.995769   14058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 21:29:53.995856   14058 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.addons-577364 san=[127.0.0.1 192.168.39.209 addons-577364 localhost minikube]
	I0410 21:29:54.126807   14058 provision.go:177] copyRemoteCerts
	I0410 21:29:54.126871   14058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 21:29:54.126892   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:54.129285   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.129616   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:54.129642   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.129799   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:29:54.129993   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:54.130135   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:29:54.130254   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:29:54.215801   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 21:29:54.243352   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0410 21:29:54.270992   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 21:29:54.298327   14058 provision.go:87] duration metric: took 308.521125ms to configureAuth
	I0410 21:29:54.298363   14058 buildroot.go:189] setting minikube options for container-runtime
	I0410 21:29:54.298570   14058 config.go:182] Loaded profile config "addons-577364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:29:54.298641   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:54.301357   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.302041   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:54.302075   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.302282   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:29:54.302455   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:54.302666   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:54.302805   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:29:54.302985   14058 main.go:141] libmachine: Using SSH client type: native
	I0410 21:29:54.303206   14058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0410 21:29:54.303228   14058 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 21:29:54.577863   14058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 21:29:54.577889   14058 main.go:141] libmachine: Checking connection to Docker...
	I0410 21:29:54.577914   14058 main.go:141] libmachine: (addons-577364) Calling .GetURL
	I0410 21:29:54.579230   14058 main.go:141] libmachine: (addons-577364) DBG | Using libvirt version 6000000
	I0410 21:29:54.581415   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.581717   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:54.581746   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.581877   14058 main.go:141] libmachine: Docker is up and running!
	I0410 21:29:54.581898   14058 main.go:141] libmachine: Reticulating splines...
	I0410 21:29:54.581908   14058 client.go:171] duration metric: took 24.788639532s to LocalClient.Create
	I0410 21:29:54.581940   14058 start.go:167] duration metric: took 24.788712873s to libmachine.API.Create "addons-577364"
	I0410 21:29:54.581971   14058 start.go:293] postStartSetup for "addons-577364" (driver="kvm2")
	I0410 21:29:54.581983   14058 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 21:29:54.582007   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:29:54.582351   14058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 21:29:54.582377   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:54.584302   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.584626   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:54.584653   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.584794   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:29:54.584972   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:54.585114   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:29:54.585217   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:29:54.671368   14058 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 21:29:54.676004   14058 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 21:29:54.676029   14058 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 21:29:54.676102   14058 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 21:29:54.676135   14058 start.go:296] duration metric: took 94.156545ms for postStartSetup
	I0410 21:29:54.676165   14058 main.go:141] libmachine: (addons-577364) Calling .GetConfigRaw
	I0410 21:29:54.676699   14058 main.go:141] libmachine: (addons-577364) Calling .GetIP
	I0410 21:29:54.679508   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.679833   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:54.679862   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.680151   14058 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/config.json ...
	I0410 21:29:54.680300   14058 start.go:128] duration metric: took 24.904531169s to createHost
	I0410 21:29:54.680321   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:54.682664   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.682964   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:54.682983   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.683107   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:29:54.683277   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:54.683418   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:54.683568   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:29:54.683694   14058 main.go:141] libmachine: Using SSH client type: native
	I0410 21:29:54.683837   14058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0410 21:29:54.683848   14058 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 21:29:54.797903   14058 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712784594.771260501
	
	I0410 21:29:54.797924   14058 fix.go:216] guest clock: 1712784594.771260501
	I0410 21:29:54.797930   14058 fix.go:229] Guest: 2024-04-10 21:29:54.771260501 +0000 UTC Remote: 2024-04-10 21:29:54.680311778 +0000 UTC m=+25.018555949 (delta=90.948723ms)
	I0410 21:29:54.797948   14058 fix.go:200] guest clock delta is within tolerance: 90.948723ms
	I0410 21:29:54.797967   14058 start.go:83] releasing machines lock for "addons-577364", held for 25.022252081s
	I0410 21:29:54.797983   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:29:54.798171   14058 main.go:141] libmachine: (addons-577364) Calling .GetIP
	I0410 21:29:54.800931   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.801273   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:54.801305   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.801445   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:29:54.801993   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:29:54.802156   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:29:54.802245   14058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 21:29:54.802283   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:54.802381   14058 ssh_runner.go:195] Run: cat /version.json
	I0410 21:29:54.802402   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:29:54.804923   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.805217   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:54.805236   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.805314   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.805389   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:29:54.805565   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:54.805702   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:29:54.805749   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:54.805775   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:54.805826   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:29:54.805954   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:29:54.806076   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:29:54.806211   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:29:54.806362   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:29:54.916903   14058 ssh_runner.go:195] Run: systemctl --version
	I0410 21:29:54.923273   14058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 21:29:55.088981   14058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 21:29:55.094957   14058 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 21:29:55.095034   14058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 21:29:55.111500   14058 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 21:29:55.111543   14058 start.go:494] detecting cgroup driver to use...
	I0410 21:29:55.111595   14058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 21:29:55.127231   14058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 21:29:55.141387   14058 docker.go:217] disabling cri-docker service (if available) ...
	I0410 21:29:55.141434   14058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 21:29:55.155114   14058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 21:29:55.169377   14058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 21:29:55.295177   14058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 21:29:55.450028   14058 docker.go:233] disabling docker service ...
	I0410 21:29:55.450098   14058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 21:29:55.465637   14058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 21:29:55.480363   14058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 21:29:55.631610   14058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 21:29:55.756713   14058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 21:29:55.771548   14058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 21:29:55.791421   14058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 21:29:55.791502   14058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:29:55.802573   14058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 21:29:55.802630   14058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:29:55.814158   14058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:29:55.825142   14058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:29:55.836591   14058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 21:29:55.848004   14058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:29:55.859569   14058 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:29:55.879171   14058 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
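Each sed one-liner above rewrites a single key in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls). The same "replace one assignment in a config file" idea is sketched below as a small stand-alone Go helper rather than sed over SSH; the file path and key are hard-coded purely for illustration.

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // setConfigKey rewrites every line that assigns key so it assigns value instead,
    // like the `sed -i 's|^.*cgroup_manager = .*$|...|'` calls in the log above.
    func setConfigKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Illustrative only; on the test VM this file is edited over SSH with sudo.
        if err := setConfigKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
            log.Fatal(err)
        }
    }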
	I0410 21:29:55.890667   14058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 21:29:55.901121   14058 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 21:29:55.901192   14058 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 21:29:55.916440   14058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
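When the bridge-netfilter sysctl is missing (the status 255 above), provisioning falls back to loading br_netfilter and then enables IPv4 forwarding. A rough local equivalent of those two steps, assuming it runs as root on a Linux host, is sketched below.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Mirror "sudo modprobe br_netfilter" when the bridge-nf sysctl file is absent.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        // Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            log.Fatal(err)
        }
    }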
	I0410 21:29:55.927256   14058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 21:29:56.052649   14058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 21:29:56.204920   14058 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 21:29:56.205029   14058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 21:29:56.210056   14058 start.go:562] Will wait 60s for crictl version
	I0410 21:29:56.210142   14058 ssh_runner.go:195] Run: which crictl
	I0410 21:29:56.213924   14058 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 21:29:56.251933   14058 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 21:29:56.252049   14058 ssh_runner.go:195] Run: crio --version
	I0410 21:29:56.285042   14058 ssh_runner.go:195] Run: crio --version
	I0410 21:29:56.315280   14058 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 21:29:56.316571   14058 main.go:141] libmachine: (addons-577364) Calling .GetIP
	I0410 21:29:56.319148   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:56.319491   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:29:56.319521   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:29:56.319714   14058 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 21:29:56.324042   14058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 21:29:56.336761   14058 kubeadm.go:877] updating cluster {Name:addons-577364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:addons-577364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 21:29:56.337089   14058 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 21:29:56.337183   14058 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 21:29:56.372891   14058 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 21:29:56.372969   14058 ssh_runner.go:195] Run: which lz4
	I0410 21:29:56.377180   14058 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 21:29:56.381615   14058 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 21:29:56.381655   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 21:29:57.799702   14058 crio.go:462] duration metric: took 1.422544057s to copy over tarball
	I0410 21:29:57.799789   14058 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 21:30:00.189651   14058 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.389829184s)
	I0410 21:30:00.189677   14058 crio.go:469] duration metric: took 2.38994214s to extract the tarball
	I0410 21:30:00.189684   14058 ssh_runner.go:146] rm: /preloaded.tar.lz4
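The preloaded image tarball is scp'd to the guest, unpacked into /var with tar's lz4 filter, and then removed. The extraction command itself, wrapped in a trivial Go invocation for illustration (assumes tar and lz4 are installed, as they are in the Buildroot guest):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Equivalent of the logged command:
        // sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract preload: %v\n%s", err, out)
        }
    }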
	I0410 21:30:00.228251   14058 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 21:30:00.279006   14058 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 21:30:00.279027   14058 cache_images.go:84] Images are preloaded, skipping loading
	I0410 21:30:00.279034   14058 kubeadm.go:928] updating node { 192.168.39.209 8443 v1.29.3 crio true true} ...
	I0410 21:30:00.279135   14058 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-577364 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-577364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 21:30:00.279198   14058 ssh_runner.go:195] Run: crio config
	I0410 21:30:00.332639   14058 cni.go:84] Creating CNI manager for ""
	I0410 21:30:00.332665   14058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 21:30:00.332680   14058 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 21:30:00.332713   14058 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-577364 NodeName:addons-577364 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 21:30:00.332875   14058 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-577364"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
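The block above is the kubeadm, kubelet, and kube-proxy configuration that minikube renders before invoking kubeadm; the same file is copied into the VM as /var/tmp/minikube/kubeadm.yaml in the steps below. As a rough offline sanity check (a sketch only, assuming the YAML above is saved locally as kubeadm.yaml and a matching kubeadm v1.29 binary is on the PATH; this is not part of the test run), kubeadm's dry-run mode can parse and exercise the config without changing node state:

	# Hypothetical local check, not executed by the test harness:
	# parse the generated config and print what kubeadm would do, making no changes.
	sudo kubeadm init --config kubeadm.yaml --dry-run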
	I0410 21:30:00.332942   14058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 21:30:00.343905   14058 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 21:30:00.343987   14058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 21:30:00.354048   14058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0410 21:30:00.373273   14058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 21:30:00.392756   14058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0410 21:30:00.412757   14058 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0410 21:30:00.416775   14058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 21:30:00.429868   14058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 21:30:00.570559   14058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 21:30:00.591623   14058 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364 for IP: 192.168.39.209
	I0410 21:30:00.591649   14058 certs.go:194] generating shared ca certs ...
	I0410 21:30:00.591664   14058 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:00.591826   14058 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 21:30:00.777812   14058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt ...
	I0410 21:30:00.777856   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt: {Name:mkc059deb6cc7493a1b19d75f3d16b57fdd42d10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:00.778033   14058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key ...
	I0410 21:30:00.778046   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key: {Name:mk72e70bf4eb3e1c06bde6b071497d0e1cf6890e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:00.778124   14058 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 21:30:01.102551   14058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt ...
	I0410 21:30:01.102577   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt: {Name:mke11a29a79544c1a587a5a9dd4ab3f3955305db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:01.102734   14058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key ...
	I0410 21:30:01.102745   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key: {Name:mka56d32ae7edfcd14cbe1d84a67fcbbfa7b5c98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:01.102835   14058 certs.go:256] generating profile certs ...
	I0410 21:30:01.102901   14058 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.key
	I0410 21:30:01.102916   14058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt with IP's: []
	I0410 21:30:01.156930   14058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt ...
	I0410 21:30:01.156958   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: {Name:mk14e25722de44e88c7becf75077ea41852c7d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:01.157116   14058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.key ...
	I0410 21:30:01.157126   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.key: {Name:mkda9548872b3d40859f983fa3eec3504ebf0f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:01.157188   14058 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.key.243d0bdb
	I0410 21:30:01.157206   14058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.crt.243d0bdb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.209]
	I0410 21:30:01.484945   14058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.crt.243d0bdb ...
	I0410 21:30:01.484979   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.crt.243d0bdb: {Name:mk50549db4c8e9b13472119296f109f731427588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:01.485161   14058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.key.243d0bdb ...
	I0410 21:30:01.485179   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.key.243d0bdb: {Name:mk1dba3bb2a9eee906cc6fc0ba1dfb33279a6cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:01.485277   14058 certs.go:381] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.crt.243d0bdb -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.crt
	I0410 21:30:01.485348   14058 certs.go:385] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.key.243d0bdb -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.key
	I0410 21:30:01.485397   14058 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/proxy-client.key
	I0410 21:30:01.485414   14058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/proxy-client.crt with IP's: []
	I0410 21:30:01.637194   14058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/proxy-client.crt ...
	I0410 21:30:01.637225   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/proxy-client.crt: {Name:mk8665412eac67c2fa753bedc2a7ded98ba65a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:01.637417   14058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/proxy-client.key ...
	I0410 21:30:01.637434   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/proxy-client.key: {Name:mkdc7dff11869a5ee33b30d194f25e56cdf14771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:01.637633   14058 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 21:30:01.637676   14058 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 21:30:01.637702   14058 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 21:30:01.637736   14058 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 21:30:01.638366   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 21:30:01.673765   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 21:30:01.704132   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 21:30:01.731618   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 21:30:01.758868   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0410 21:30:01.787249   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 21:30:01.816527   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 21:30:01.844915   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 21:30:01.873154   14058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 21:30:01.902157   14058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 21:30:01.921976   14058 ssh_runner.go:195] Run: openssl version
	I0410 21:30:01.928701   14058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 21:30:01.942425   14058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:30:01.947701   14058 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:30:01.947773   14058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:30:01.954040   14058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
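The two openssl steps above follow the standard OpenSSL hashed-symlink layout for trust stores: the CA is placed under /usr/share/ca-certificates, its subject-name hash is computed, and a <hash>.0 symlink (here b5213941.0) is created under /etc/ssl/certs so TLS clients can locate it. A minimal sketch of the same convention (illustrative commands mirroring what the log shows, not additional test steps):

	# compute the subject-name hash OpenSSL uses for certificate lookup
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# link the CA into the hashed trust directory under <hash>.0
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"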
	I0410 21:30:01.970295   14058 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 21:30:01.979662   14058 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0410 21:30:01.979724   14058 kubeadm.go:391] StartCluster: {Name:addons-577364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-577364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:30:01.979809   14058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 21:30:01.979861   14058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 21:30:02.029148   14058 cri.go:89] found id: ""
	I0410 21:30:02.029244   14058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0410 21:30:02.044767   14058 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 21:30:02.058358   14058 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 21:30:02.069080   14058 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 21:30:02.069117   14058 kubeadm.go:156] found existing configuration files:
	
	I0410 21:30:02.069169   14058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 21:30:02.078986   14058 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 21:30:02.079060   14058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 21:30:02.089595   14058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 21:30:02.099357   14058 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 21:30:02.099417   14058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 21:30:02.109861   14058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 21:30:02.119533   14058 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 21:30:02.119591   14058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 21:30:02.129589   14058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 21:30:02.139567   14058 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 21:30:02.139631   14058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 21:30:02.149974   14058 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 21:30:02.203698   14058 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0410 21:30:02.203819   14058 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 21:30:02.338509   14058 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 21:30:02.338640   14058 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 21:30:02.338744   14058 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 21:30:02.561007   14058 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 21:30:02.612964   14058 out.go:204]   - Generating certificates and keys ...
	I0410 21:30:02.613118   14058 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 21:30:02.613223   14058 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 21:30:02.634533   14058 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0410 21:30:02.827979   14058 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0410 21:30:02.934429   14058 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0410 21:30:03.023344   14058 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0410 21:30:03.246853   14058 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0410 21:30:03.247085   14058 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-577364 localhost] and IPs [192.168.39.209 127.0.0.1 ::1]
	I0410 21:30:03.455832   14058 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0410 21:30:03.456148   14058 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-577364 localhost] and IPs [192.168.39.209 127.0.0.1 ::1]
	I0410 21:30:03.671796   14058 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0410 21:30:03.732957   14058 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0410 21:30:03.940237   14058 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0410 21:30:03.940535   14058 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 21:30:04.255699   14058 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 21:30:04.512886   14058 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 21:30:04.722770   14058 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 21:30:05.031771   14058 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 21:30:05.376965   14058 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 21:30:05.377636   14058 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 21:30:05.379909   14058 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 21:30:05.381741   14058 out.go:204]   - Booting up control plane ...
	I0410 21:30:05.381870   14058 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 21:30:05.381971   14058 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 21:30:05.382051   14058 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 21:30:05.399892   14058 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 21:30:05.403281   14058 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 21:30:05.403469   14058 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 21:30:05.543076   14058 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 21:30:11.040936   14058 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.501785 seconds
	I0410 21:30:11.055869   14058 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 21:30:11.073637   14058 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 21:30:11.611200   14058 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 21:30:11.611418   14058 kubeadm.go:309] [mark-control-plane] Marking the node addons-577364 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 21:30:12.127054   14058 kubeadm.go:309] [bootstrap-token] Using token: 3birxq.z054rqk0a8x7pza0
	I0410 21:30:12.128657   14058 out.go:204]   - Configuring RBAC rules ...
	I0410 21:30:12.128753   14058 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 21:30:12.135672   14058 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 21:30:12.145730   14058 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 21:30:12.150613   14058 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 21:30:12.157418   14058 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 21:30:12.160828   14058 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 21:30:12.174492   14058 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 21:30:12.428902   14058 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 21:30:12.552007   14058 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 21:30:12.552900   14058 kubeadm.go:309] 
	I0410 21:30:12.552990   14058 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 21:30:12.553002   14058 kubeadm.go:309] 
	I0410 21:30:12.553082   14058 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 21:30:12.553091   14058 kubeadm.go:309] 
	I0410 21:30:12.553126   14058 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 21:30:12.553209   14058 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 21:30:12.553297   14058 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 21:30:12.553309   14058 kubeadm.go:309] 
	I0410 21:30:12.553390   14058 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 21:30:12.553398   14058 kubeadm.go:309] 
	I0410 21:30:12.553438   14058 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 21:30:12.553446   14058 kubeadm.go:309] 
	I0410 21:30:12.553488   14058 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 21:30:12.553557   14058 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 21:30:12.553614   14058 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 21:30:12.553621   14058 kubeadm.go:309] 
	I0410 21:30:12.553691   14058 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 21:30:12.553812   14058 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 21:30:12.553828   14058 kubeadm.go:309] 
	I0410 21:30:12.553933   14058 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3birxq.z054rqk0a8x7pza0 \
	I0410 21:30:12.554062   14058 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 21:30:12.554093   14058 kubeadm.go:309] 	--control-plane 
	I0410 21:30:12.554104   14058 kubeadm.go:309] 
	I0410 21:30:12.554206   14058 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 21:30:12.554219   14058 kubeadm.go:309] 
	I0410 21:30:12.554328   14058 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3birxq.z054rqk0a8x7pza0 \
	I0410 21:30:12.554501   14058 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 21:30:12.555265   14058 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 21:30:12.555283   14058 cni.go:84] Creating CNI manager for ""
	I0410 21:30:12.555291   14058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 21:30:12.557546   14058 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 21:30:12.558697   14058 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 21:30:12.592500   14058 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
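At this point minikube has written its bridge CNI configuration into the guest at /etc/cni/net.d/1-k8s.conflist. To inspect what was generated when reproducing this run (hypothetical follow-up commands issued from the host, assuming the profile name used above; they are not part of the test itself):

	# open a one-shot shell command inside the VM for this profile
	minikube -p addons-577364 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	# confirm the CNI plugin binaries the bridge config relies on are present
	minikube -p addons-577364 ssh -- ls /opt/cni/bin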
	I0410 21:30:12.631828   14058 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 21:30:12.631949   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-577364 minikube.k8s.io/updated_at=2024_04_10T21_30_12_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=addons-577364 minikube.k8s.io/primary=true
	I0410 21:30:12.632368   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:12.676410   14058 ops.go:34] apiserver oom_adj: -16
	I0410 21:30:12.829154   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:13.329884   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:13.829569   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:14.329671   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:14.829643   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:15.329200   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:15.829232   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:16.329914   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:16.830077   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:17.329605   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:17.829447   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:18.329791   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:18.829892   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:19.329244   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:19.830034   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:20.329823   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:20.829938   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:21.329489   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:21.829554   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:22.329912   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:22.829975   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:23.329512   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:23.829886   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:24.329327   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:24.829776   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:25.329521   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:25.829235   14058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 21:30:26.056845   14058 kubeadm.go:1107] duration metric: took 13.424516024s to wait for elevateKubeSystemPrivileges
	W0410 21:30:26.056891   14058 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 21:30:26.056901   14058 kubeadm.go:393] duration metric: took 24.077181427s to StartCluster
	I0410 21:30:26.056923   14058 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:26.057072   14058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 21:30:26.057442   14058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:30:26.058339   14058 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 21:30:26.060003   14058 out.go:177] * Verifying Kubernetes components...
	I0410 21:30:26.058394   14058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0410 21:30:26.058431   14058 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
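The toEnable map above is the addon set this test requests; each key is a named minikube addon. Outside the harness the same toggles correspond to the addons CLI (illustrative commands, assuming the profile name from this run):

	# show addon status for the profile, then enable or disable one by name
	minikube -p addons-577364 addons list
	minikube -p addons-577364 addons enable registry
	minikube -p addons-577364 addons disable registry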
	I0410 21:30:26.058599   14058 config.go:182] Loaded profile config "addons-577364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:30:26.061529   14058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 21:30:26.061563   14058 addons.go:69] Setting cloud-spanner=true in profile "addons-577364"
	I0410 21:30:26.061587   14058 addons.go:69] Setting yakd=true in profile "addons-577364"
	I0410 21:30:26.061596   14058 addons.go:234] Setting addon cloud-spanner=true in "addons-577364"
	I0410 21:30:26.061598   14058 addons.go:69] Setting gcp-auth=true in profile "addons-577364"
	I0410 21:30:26.061613   14058 addons.go:234] Setting addon yakd=true in "addons-577364"
	I0410 21:30:26.061614   14058 addons.go:69] Setting ingress-dns=true in profile "addons-577364"
	I0410 21:30:26.061631   14058 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-577364"
	I0410 21:30:26.061631   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.061636   14058 addons.go:69] Setting metrics-server=true in profile "addons-577364"
	I0410 21:30:26.061639   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.061653   14058 addons.go:234] Setting addon ingress-dns=true in "addons-577364"
	I0410 21:30:26.061649   14058 addons.go:69] Setting registry=true in profile "addons-577364"
	I0410 21:30:26.061662   14058 addons.go:69] Setting inspektor-gadget=true in profile "addons-577364"
	I0410 21:30:26.061680   14058 addons.go:234] Setting addon inspektor-gadget=true in "addons-577364"
	I0410 21:30:26.061680   14058 addons.go:234] Setting addon registry=true in "addons-577364"
	I0410 21:30:26.061696   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.061711   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.061719   14058 addons.go:69] Setting helm-tiller=true in profile "addons-577364"
	I0410 21:30:26.061733   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.061747   14058 addons.go:234] Setting addon helm-tiller=true in "addons-577364"
	I0410 21:30:26.061773   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.062124   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.062141   14058 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-577364"
	I0410 21:30:26.062155   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.062161   14058 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-577364"
	I0410 21:30:26.062161   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.062173   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.062176   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.062181   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.062184   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.062193   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.062211   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.062232   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.062264   14058 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-577364"
	I0410 21:30:26.062271   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.062287   14058 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-577364"
	I0410 21:30:26.062298   14058 addons.go:69] Setting storage-provisioner=true in profile "addons-577364"
	I0410 21:30:26.062312   14058 addons.go:234] Setting addon storage-provisioner=true in "addons-577364"
	I0410 21:30:26.061625   14058 mustload.go:65] Loading cluster: addons-577364
	I0410 21:30:26.062130   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.062323   14058 addons.go:69] Setting volumesnapshots=true in profile "addons-577364"
	I0410 21:30:26.062344   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.062344   14058 addons.go:234] Setting addon volumesnapshots=true in "addons-577364"
	I0410 21:30:26.061655   14058 addons.go:234] Setting addon metrics-server=true in "addons-577364"
	I0410 21:30:26.062365   14058 addons.go:69] Setting default-storageclass=true in profile "addons-577364"
	I0410 21:30:26.062413   14058 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-577364"
	I0410 21:30:26.062353   14058 addons.go:69] Setting ingress=true in profile "addons-577364"
	I0410 21:30:26.062442   14058 addons.go:234] Setting addon ingress=true in "addons-577364"
	I0410 21:30:26.062500   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.062510   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.062548   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.062789   14058 config.go:182] Loaded profile config "addons-577364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:30:26.062973   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.063012   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.063046   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.063090   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.063136   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.063435   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.063483   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.063499   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.063527   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.063547   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.063744   14058 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-577364"
	I0410 21:30:26.063798   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.063889   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.063926   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.063109   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.064053   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.063054   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.063143   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.063021   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.085132   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0410 21:30:26.085628   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39801
	I0410 21:30:26.085812   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.086053   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.086472   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.086511   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.086628   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.086650   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.086899   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.087043   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33661
	I0410 21:30:26.087606   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.087629   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.087860   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.087868   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.088389   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.088434   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.088518   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.088598   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.088867   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.089456   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.089507   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.091894   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39321
	I0410 21:30:26.092340   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.092854   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.092886   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.093227   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.095838   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0410 21:30:26.100852   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.100911   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.100861   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.101112   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.101432   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.101530   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44839
	I0410 21:30:26.102002   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.102028   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.102301   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.102527   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39761
	I0410 21:30:26.102661   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.102940   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.103267   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.103301   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.103335   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.103351   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.103678   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.103816   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.103826   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.104214   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.104240   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.104472   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.105030   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.105068   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.110582   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I0410 21:30:26.111287   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.111884   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.111902   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.112323   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.113030   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.113060   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.123090   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0410 21:30:26.123644   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.124170   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.124199   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.124787   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.125422   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.125468   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.127645   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0410 21:30:26.128076   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.128625   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.128642   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.128988   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.129538   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.129563   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.134707   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0410 21:30:26.134727   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45949
	I0410 21:30:26.134890   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I0410 21:30:26.135110   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.135209   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.135270   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.135587   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.135602   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.135731   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.135741   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.135847   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.135856   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.135957   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.136128   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.136180   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.136450   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.136513   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.137928   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.138929   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.139198   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.141781   14058 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0410 21:30:26.140863   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0410 21:30:26.141019   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I0410 21:30:26.142201   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43021
	I0410 21:30:26.145165   14058 out.go:177]   - Using image docker.io/registry:2.8.3
	I0410 21:30:26.143870   14058 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0410 21:30:26.144500   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.144500   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.144564   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.146863   14058 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0410 21:30:26.146877   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0410 21:30:26.146899   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.147246   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.148710   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.148738   14058 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0410 21:30:26.148751   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0410 21:30:26.147495   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.148770   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.148774   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.147838   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.148812   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.149485   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.149540   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.149640   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.149648   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.149846   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.150013   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.150933   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I0410 21:30:26.151758   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.152043   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.153996   14058 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0410 21:30:26.152461   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.152586   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.153256   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.153437   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.153601   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.153698   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.154101   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.154697   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.155568   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.155597   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.155608   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.155643   14058 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0410 21:30:26.155653   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0410 21:30:26.155667   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.155729   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.155755   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.157494   14058 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0410 21:30:26.158924   14058 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0410 21:30:26.156580   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.158927   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.160467   14058 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0410 21:30:26.156645   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.156665   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.158208   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0410 21:30:26.158942   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0410 21:30:26.159309   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.160101   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.160266   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.162480   14058 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 21:30:26.162534   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 21:30:26.162555   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.162514   14058 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0410 21:30:26.162610   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.164155   14058 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0410 21:30:26.164168   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0410 21:30:26.164185   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.162619   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.162856   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.163063   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.163425   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.163802   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.164477   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.164648   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.164704   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.165069   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.165514   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.165532   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.166099   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I0410 21:30:26.166429   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.166622   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.167027   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.167788   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.168019   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.168181   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.168258   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.168281   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.168505   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.168566   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.168584   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.168621   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.170520   14058 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0410 21:30:26.169053   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0410 21:30:26.169087   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.169093   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.169616   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.169792   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.170304   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.171506   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46525
	I0410 21:30:26.171736   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40609
	I0410 21:30:26.172193   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.172358   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I0410 21:30:26.173183   14058 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0410 21:30:26.173218   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.173299   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.173442   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.173462   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.173550   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.173698   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.176138   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.176156   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.176175   14058 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0410 21:30:26.176189   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0410 21:30:26.176206   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.174807   14058 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0410 21:30:26.174883   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.175512   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.175516   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.175519   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.175544   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.175536   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.175782   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.176468   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.176527   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.180129   14058 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0410 21:30:26.177890   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.178116   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.178216   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.178238   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.178245   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.178587   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.178803   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.180105   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.180780   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.181567   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.181585   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.181643   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.181853   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.181875   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.181904   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.181976   14058 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0410 21:30:26.181987   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0410 21:30:26.182001   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.182030   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.182058   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.182254   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.182260   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.182306   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.182314   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.182445   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.182449   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.184241   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44329
	I0410 21:30:26.184895   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.185190   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.185211   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.185544   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.185763   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.187854   14058 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 21:30:26.189570   14058 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 21:30:26.188007   14058 addons.go:234] Setting addon default-storageclass=true in "addons-577364"
	I0410 21:30:26.189635   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.190060   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.190099   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.186436   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.190175   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.190206   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.186152   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.190220   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.188023   14058 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-577364"
	I0410 21:30:26.190298   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:26.188460   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35345
	I0410 21:30:26.188483   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.189593   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 21:30:26.190522   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.190627   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.190664   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.190782   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.190917   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.191037   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.193748   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.194157   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.194179   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.194472   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.194571   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.195229   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.195248   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.195271   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.195294   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.195626   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.195628   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.195864   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.195998   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.196037   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.196349   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.197440   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.199650   14058 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0410 21:30:26.201130   14058 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0410 21:30:26.201146   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0410 21:30:26.201163   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.203921   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.204233   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.204252   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.204472   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.204675   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.204854   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.204995   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.206712   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36307
	I0410 21:30:26.207693   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.208275   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.208291   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.208594   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38651
	I0410 21:30:26.208850   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.209040   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.210794   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
	I0410 21:30:26.211121   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.212033   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.212062   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.212245   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.212487   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.213044   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.213078   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.219494   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0410 21:30:26.219887   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.219968   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.219983   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.220322   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.220486   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.220499   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.220552   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.220871   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.221091   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.222316   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.224703   14058 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0410 21:30:26.222706   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0410 21:30:26.223138   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.225490   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.226653   14058 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0410 21:30:26.228253   14058 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0410 21:30:26.226806   14058 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0410 21:30:26.227727   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.229610   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.231334   14058 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0410 21:30:26.231351   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0410 21:30:26.231369   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.232660   14058 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0410 21:30:26.234057   14058 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0410 21:30:26.232735   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44995
	I0410 21:30:26.229894   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.234855   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.236600   14058 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0410 21:30:26.237794   14058 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0410 21:30:26.235694   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.235829   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.235903   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:26.236069   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:26.239092   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.239122   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:26.240522   14058 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0410 21:30:26.239261   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.239699   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.242019   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.242031   14058 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0410 21:30:26.242053   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0410 21:30:26.242075   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.242257   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.242428   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.242643   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.242959   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.244499   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.244840   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.246337   14058 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0410 21:30:26.245252   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	W0410 21:30:26.245461   14058 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58012->192.168.39.209:22: read: connection reset by peer
	I0410 21:30:26.245514   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.247866   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.247881   14058 retry.go:31] will retry after 313.000261ms: ssh: handshake failed: read tcp 192.168.39.1:58012->192.168.39.209:22: read: connection reset by peer
	I0410 21:30:26.249563   14058 out.go:177]   - Using image docker.io/busybox:stable
	I0410 21:30:26.248128   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.251423   14058 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0410 21:30:26.251443   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0410 21:30:26.251456   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.251507   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.251682   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.254300   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.254680   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.254725   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.254843   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.255024   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.255237   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.255403   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.258085   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0410 21:30:26.258541   14058 main.go:141] libmachine: () Calling .GetVersion
	W0410 21:30:26.258785   14058 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58022->192.168.39.209:22: read: connection reset by peer
	I0410 21:30:26.258806   14058 retry.go:31] will retry after 144.736625ms: ssh: handshake failed: read tcp 192.168.39.1:58022->192.168.39.209:22: read: connection reset by peer
	I0410 21:30:26.259116   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:26.259141   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:26.259443   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:26.259616   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:26.261036   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:26.261303   14058 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 21:30:26.261316   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 21:30:26.261328   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:26.263400   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.263676   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:26.263714   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:26.263875   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:26.264033   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:26.264202   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:26.264343   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:26.345436   14058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 21:30:26.499150   14058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0410 21:30:26.535547   14058 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0410 21:30:26.535569   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0410 21:30:26.551974   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0410 21:30:26.556106   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0410 21:30:26.572777   14058 node_ready.go:35] waiting up to 6m0s for node "addons-577364" to be "Ready" ...
	I0410 21:30:26.575770   14058 node_ready.go:49] node "addons-577364" has status "Ready":"True"
	I0410 21:30:26.575793   14058 node_ready.go:38] duration metric: took 2.980372ms for node "addons-577364" to be "Ready" ...
	I0410 21:30:26.575820   14058 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 21:30:26.583769   14058 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5whqs" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:26.648143   14058 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0410 21:30:26.648163   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0410 21:30:26.715825   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0410 21:30:26.725553   14058 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 21:30:26.725580   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0410 21:30:26.741898   14058 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0410 21:30:26.741926   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0410 21:30:26.757392   14058 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0410 21:30:26.757422   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0410 21:30:26.757836   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 21:30:26.763298   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0410 21:30:26.763698   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0410 21:30:26.770296   14058 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0410 21:30:26.770317   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0410 21:30:26.837648   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 21:30:26.885233   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0410 21:30:26.887906   14058 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0410 21:30:26.887924   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0410 21:30:26.906494   14058 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0410 21:30:26.906513   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0410 21:30:26.948550   14058 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0410 21:30:26.948578   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0410 21:30:27.000986   14058 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 21:30:27.001010   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 21:30:27.008826   14058 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0410 21:30:27.008846   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0410 21:30:27.154131   14058 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0410 21:30:27.154156   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0410 21:30:27.267561   14058 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0410 21:30:27.267587   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0410 21:30:27.270702   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0410 21:30:27.306991   14058 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0410 21:30:27.307012   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0410 21:30:27.308881   14058 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0410 21:30:27.308902   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0410 21:30:27.344226   14058 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0410 21:30:27.344253   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0410 21:30:27.439692   14058 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 21:30:27.439725   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 21:30:27.538603   14058 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0410 21:30:27.538625   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0410 21:30:27.540716   14058 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0410 21:30:27.540733   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0410 21:30:27.636830   14058 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0410 21:30:27.636859   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0410 21:30:27.686581   14058 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0410 21:30:27.686610   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0410 21:30:27.709229   14058 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0410 21:30:27.709251   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0410 21:30:27.751997   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 21:30:27.880458   14058 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0410 21:30:27.880494   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0410 21:30:27.882401   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0410 21:30:27.946751   14058 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0410 21:30:27.946776   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0410 21:30:27.996274   14058 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0410 21:30:27.996298   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0410 21:30:28.077525   14058 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0410 21:30:28.077547   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0410 21:30:28.278712   14058 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0410 21:30:28.278740   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0410 21:30:28.318109   14058 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0410 21:30:28.318152   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0410 21:30:28.362957   14058 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0410 21:30:28.362976   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0410 21:30:28.513242   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0410 21:30:28.529247   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0410 21:30:28.555240   14058 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0410 21:30:28.555274   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0410 21:30:28.595911   14058 pod_ready.go:102] pod "coredns-76f75df574-5whqs" in "kube-system" namespace has status "Ready":"False"
	I0410 21:30:28.733142   14058 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0410 21:30:28.733166   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0410 21:30:28.947977   14058 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0410 21:30:28.948004   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0410 21:30:29.120271   14058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.621086206s)
	I0410 21:30:29.120306   14058 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0410 21:30:29.146016   14058 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0410 21:30:29.146037   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0410 21:30:29.294922   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0410 21:30:29.623597   14058 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-577364" context rescaled to 1 replicas
	I0410 21:30:29.745292   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.193277023s)
	I0410 21:30:29.745332   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:29.745343   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:29.745608   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:29.745622   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:29.745632   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:29.745647   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:29.745655   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:29.745903   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:29.745933   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:29.745941   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:30.603891   14058 pod_ready.go:102] pod "coredns-76f75df574-5whqs" in "kube-system" namespace has status "Ready":"False"
	I0410 21:30:32.170498   14058 pod_ready.go:92] pod "coredns-76f75df574-5whqs" in "kube-system" namespace has status "Ready":"True"
	I0410 21:30:32.170521   14058 pod_ready.go:81] duration metric: took 5.586712783s for pod "coredns-76f75df574-5whqs" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:32.170532   14058 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jg22x" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:33.030720   14058 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0410 21:30:33.030754   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:33.033484   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:33.033854   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:33.033876   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:33.034062   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:33.034296   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:33.034472   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:33.034652   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:33.576524   14058 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0410 21:30:33.638795   14058 addons.go:234] Setting addon gcp-auth=true in "addons-577364"
	I0410 21:30:33.638849   14058 host.go:66] Checking if "addons-577364" exists ...
	I0410 21:30:33.639255   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:33.639290   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:33.656146   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35301
	I0410 21:30:33.656610   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:33.657131   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:33.657156   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:33.657454   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:33.658081   14058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:30:33.658129   14058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:30:33.675148   14058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I0410 21:30:33.676202   14058 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:30:33.676793   14058 main.go:141] libmachine: Using API Version  1
	I0410 21:30:33.676819   14058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:30:33.677151   14058 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:30:33.677457   14058 main.go:141] libmachine: (addons-577364) Calling .GetState
	I0410 21:30:33.679205   14058 main.go:141] libmachine: (addons-577364) Calling .DriverName
	I0410 21:30:33.679435   14058 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0410 21:30:33.679457   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHHostname
	I0410 21:30:33.682176   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:33.682571   14058 main.go:141] libmachine: (addons-577364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:30:7c", ip: ""} in network mk-addons-577364: {Iface:virbr1 ExpiryTime:2024-04-10 22:29:45 +0000 UTC Type:0 Mac:52:54:00:b9:30:7c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-577364 Clientid:01:52:54:00:b9:30:7c}
	I0410 21:30:33.682592   14058 main.go:141] libmachine: (addons-577364) DBG | domain addons-577364 has defined IP address 192.168.39.209 and MAC address 52:54:00:b9:30:7c in network mk-addons-577364
	I0410 21:30:33.682691   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHPort
	I0410 21:30:33.682867   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHKeyPath
	I0410 21:30:33.683025   14058 main.go:141] libmachine: (addons-577364) Calling .GetSSHUsername
	I0410 21:30:33.683191   14058 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/addons-577364/id_rsa Username:docker}
	I0410 21:30:34.180650   14058 pod_ready.go:102] pod "coredns-76f75df574-jg22x" in "kube-system" namespace has status "Ready":"False"
	I0410 21:30:35.795584   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.239436489s)
	I0410 21:30:35.795621   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.079761955s)
	I0410 21:30:35.795631   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.795645   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.795655   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.795666   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.795684   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.037820635s)
	I0410 21:30:35.795718   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.795733   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.795757   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.032437515s)
	I0410 21:30:35.795774   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.795781   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.795854   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.032131983s)
	I0410 21:30:35.795896   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.795927   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.796133   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.796154   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.796188   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.796197   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.796206   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.796213   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.796471   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.796504   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.796512   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.796520   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.796528   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.796865   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.796886   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.796895   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.796902   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.797864   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.797902   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.797910   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.797918   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.797926   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.798007   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.798033   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.798034   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.798050   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.798062   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.798072   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.798077   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.960400845s)
	I0410 21:30:35.798097   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.798105   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.798210   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.91295068s)
	I0410 21:30:35.798227   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.798237   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.798247   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.798270   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.798276   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.798283   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.798290   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.798830   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.528098696s)
	I0410 21:30:35.798867   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.798887   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.798909   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.798878   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.798930   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.798991   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.046966836s)
	I0410 21:30:35.799005   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.799010   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.799011   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.799059   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.799057   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.916633856s)
	I0410 21:30:35.799074   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.799077   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.799081   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.799084   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.799092   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.799099   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.799208   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.285935459s)
	W0410 21:30:35.799242   14058 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0410 21:30:35.799261   14058 retry.go:31] will retry after 262.310032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
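
The "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first" failure above is the usual race between registering a CRD and applying custom resources that depend on it; the log shows minikube simply retrying after 262ms (and later re-applying with --force). A minimal sketch of waiting for the CRD's Established condition before applying the snapshot class, assuming the apiextensions clientset from client-go is available and using the kubeconfig path seen in the log (the function name and timeouts are illustrative, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForCRD polls until the named CRD reports Established=True or the deadline passes.
func waitForCRD(ctx context.Context, cs apiextclient.Interface, name string) error {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("CRD %s not established in time", name)
}

func main() {
	// Kubeconfig path taken from the log above; adjust for an out-of-VM client.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForCRD(context.Background(), cs, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
}
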
	I0410 21:30:35.799343   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.270069489s)
	I0410 21:30:35.799359   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.799367   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.799421   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.799441   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.799447   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.799456   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.799463   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.799596   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.799623   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.799636   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.799681   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.799715   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.799734   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.799740   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.799748   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.799755   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.800082   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.800106   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.800113   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.800131   14058 addons.go:470] Verifying addon registry=true in "addons-577364"
	I0410 21:30:35.802631   14058 out.go:177] * Verifying registry addon...
	I0410 21:30:35.800187   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.800213   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.800233   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.800257   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.800312   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.800350   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.801148   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.801394   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.801432   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.804143   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.804162   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.804170   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.804172   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.804152   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.804191   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.804195   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.804215   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.804257   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.804174   14058 addons.go:470] Verifying addon metrics-server=true in "addons-577364"
	I0410 21:30:35.804272   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.804180   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.804288   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.804191   14058 addons.go:470] Verifying addon ingress=true in "addons-577364"
	I0410 21:30:35.806034   14058 out.go:177] * Verifying ingress addon...
	I0410 21:30:35.804533   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.804560   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.804592   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.804610   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.804715   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.804986   14058 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0410 21:30:35.805000   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.806075   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.806091   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.806100   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:35.808652   14058 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-577364 service yakd-dashboard -n yakd-dashboard
	
	I0410 21:30:35.808235   14058 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0410 21:30:35.864625   14058 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0410 21:30:35.864652   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:35.864772   14058 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0410 21:30:35.864788   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
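
The kapi.go lines above first list the pods matching an addon's label selector and then poll each one until it leaves Pending. A minimal client-go sketch of that listing step, assuming an already configured clientset (setup omitted) and using the selector and namespace shown in the log:

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listRegistryPods looks up the registry addon's pods by label selector,
// mirroring the "Found 2 Pods for label selector" lines above.
func listRegistryPods(ctx context.Context, clientset kubernetes.Interface) error {
	pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		LabelSelector: "kubernetes.io/minikube-addons=registry",
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		// While images are still pulling these report Pending, matching the log.
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
	return nil
}
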
	I0410 21:30:35.904784   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.904808   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.905225   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.905242   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	W0410 21:30:35.905323   14058 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0410 21:30:35.927117   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:35.927136   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:35.927559   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:35.927576   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:35.927591   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:36.062404   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0410 21:30:36.190770   14058 pod_ready.go:102] pod "coredns-76f75df574-jg22x" in "kube-system" namespace has status "Ready":"False"
	I0410 21:30:36.324648   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:36.328906   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:36.870821   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:36.880366   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:37.012326   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.717360373s)
	I0410 21:30:37.012377   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:37.012386   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:37.012422   14058 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.332965713s)
	I0410 21:30:37.014236   14058 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0410 21:30:37.012708   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:37.012715   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:37.015509   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:37.015529   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:37.016887   14058 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0410 21:30:37.015537   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:37.018195   14058 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0410 21:30:37.018215   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0410 21:30:37.017145   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:37.018258   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:37.018270   14058 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-577364"
	I0410 21:30:37.017167   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:37.019588   14058 out.go:177] * Verifying csi-hostpath-driver addon...
	I0410 21:30:37.021759   14058 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0410 21:30:37.038457   14058 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0410 21:30:37.038478   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:37.181333   14058 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0410 21:30:37.181398   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0410 21:30:37.256025   14058 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0410 21:30:37.256056   14058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0410 21:30:37.310593   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:37.314510   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:37.329466   14058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0410 21:30:37.610521   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:37.811768   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:37.818797   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:38.028042   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:38.314014   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:38.314227   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:38.527401   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:38.676796   14058 pod_ready.go:102] pod "coredns-76f75df574-jg22x" in "kube-system" namespace has status "Ready":"False"
	I0410 21:30:38.811663   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:38.818134   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:38.869047   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.806593538s)
	I0410 21:30:38.869098   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:38.869116   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:38.869367   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:38.869427   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:38.869449   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:38.869467   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:38.869478   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:38.869712   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:38.869757   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:38.869774   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:39.029908   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:39.200427   14058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.870929143s)
	I0410 21:30:39.200473   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:39.200496   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:39.200798   14058 main.go:141] libmachine: (addons-577364) DBG | Closing plugin on server side
	I0410 21:30:39.200848   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:39.200857   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:39.200871   14058 main.go:141] libmachine: Making call to close driver server
	I0410 21:30:39.200879   14058 main.go:141] libmachine: (addons-577364) Calling .Close
	I0410 21:30:39.201185   14058 main.go:141] libmachine: Successfully made call to close driver server
	I0410 21:30:39.201202   14058 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 21:30:39.202719   14058 addons.go:470] Verifying addon gcp-auth=true in "addons-577364"
	I0410 21:30:39.204832   14058 out.go:177] * Verifying gcp-auth addon...
	I0410 21:30:39.206890   14058 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0410 21:30:39.230645   14058 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0410 21:30:39.230675   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:39.340311   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:39.340569   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:39.533613   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:39.710920   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:39.812312   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:39.814430   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:40.027603   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:40.210702   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:40.311524   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:40.313681   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:40.533159   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:40.677874   14058 pod_ready.go:102] pod "coredns-76f75df574-jg22x" in "kube-system" namespace has status "Ready":"False"
	I0410 21:30:40.711141   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:40.811578   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:40.814564   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:41.030888   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:41.210712   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:41.310705   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:41.314329   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:41.527396   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:41.710860   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:41.811941   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:41.816306   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:42.027519   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:42.210818   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:42.311347   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:42.327372   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:42.527139   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:42.678187   14058 pod_ready.go:102] pod "coredns-76f75df574-jg22x" in "kube-system" namespace has status "Ready":"False"
	I0410 21:30:42.710158   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:42.812321   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:42.815097   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:43.064155   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:43.211225   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:43.312844   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:43.314460   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:43.533054   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:43.711858   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:43.813088   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:43.816800   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:44.037165   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:44.177731   14058 pod_ready.go:97] pod "coredns-76f75df574-jg22x" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-10 21:30:43 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-10 21:30:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-10 21:30:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-10 21:30:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-10 21:30:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.209 HostIPs:[{IP:192.168.39.209}] PodIP: PodIPs:[] StartTime:2024-04-10 21:30:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-10 21:30:31 +0000 UTC,FinishedAt:2024-04-10 21:30:41 +0000 UTC,ContainerID:cri-o://e44c2b2587a04df720ee2dd780419e0634f7b62848ca24a5e4c6c23a01ccecba,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://e44c2b2587a04df720ee2dd780419e0634f7b62848ca24a5e4c6c23a01ccecba Started:0xc00347ca70 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0410 21:30:44.177759   14058 pod_ready.go:81] duration metric: took 12.007221005s for pod "coredns-76f75df574-jg22x" in "kube-system" namespace to be "Ready" ...
	E0410 21:30:44.177769   14058 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-76f75df574-jg22x" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-10 21:30:43 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-10 21:30:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-10 21:30:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-10 21:30:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-10 21:30:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.209 HostIPs:[{IP:192.168.39.209}] PodIP: PodIPs:[] StartTime:2024-04-10 21:30:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-04-10 21:30:31 +0000 UTC,FinishedAt:2024-04-10 21:30:41 +0000 UTC,ContainerID:cri-o://e44c2b2587a04df720ee2dd780419e0634f7b62848ca24a5e4c6c23a01ccecba,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://e44c2b2587a04df720ee2dd780419e0634f7b62848ca24a5e4c6c23a01ccecba Started:0xc00347ca70 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0410 21:30:44.177776   14058 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-577364" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:44.185431   14058 pod_ready.go:92] pod "etcd-addons-577364" in "kube-system" namespace has status "Ready":"True"
	I0410 21:30:44.185455   14058 pod_ready.go:81] duration metric: took 7.672219ms for pod "etcd-addons-577364" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:44.185464   14058 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-577364" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:44.191948   14058 pod_ready.go:92] pod "kube-apiserver-addons-577364" in "kube-system" namespace has status "Ready":"True"
	I0410 21:30:44.191975   14058 pod_ready.go:81] duration metric: took 6.498289ms for pod "kube-apiserver-addons-577364" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:44.191986   14058 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-577364" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:44.197701   14058 pod_ready.go:92] pod "kube-controller-manager-addons-577364" in "kube-system" namespace has status "Ready":"True"
	I0410 21:30:44.197719   14058 pod_ready.go:81] duration metric: took 5.725391ms for pod "kube-controller-manager-addons-577364" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:44.197730   14058 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gx5s" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:44.206167   14058 pod_ready.go:92] pod "kube-proxy-6gx5s" in "kube-system" namespace has status "Ready":"True"
	I0410 21:30:44.206183   14058 pod_ready.go:81] duration metric: took 8.446694ms for pod "kube-proxy-6gx5s" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:44.206194   14058 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-577364" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:44.211004   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:44.311671   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:44.315145   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:44.534521   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:44.575144   14058 pod_ready.go:92] pod "kube-scheduler-addons-577364" in "kube-system" namespace has status "Ready":"True"
	I0410 21:30:44.575171   14058 pod_ready.go:81] duration metric: took 368.969345ms for pod "kube-scheduler-addons-577364" in "kube-system" namespace to be "Ready" ...
	I0410 21:30:44.575178   14058 pod_ready.go:38] duration metric: took 17.999333975s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
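
The pod_ready.go waits above only count a pod as "Ready" when it is Running with its Ready condition set to True, which is why the coredns pod that had already moved to Succeeded was skipped rather than waited on. A minimal sketch of that check using the corev1 types from client-go (the helper name is illustrative, not minikube's actual code):

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod is Running with its Ready condition True.
// A pod that has already Succeeded (like the replaced coredns pod above)
// returns false and should be skipped, not waited on.
func isPodReady(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
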
	I0410 21:30:44.575192   14058 api_server.go:52] waiting for apiserver process to appear ...
	I0410 21:30:44.575240   14058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 21:30:44.593849   14058 api_server.go:72] duration metric: took 18.535475602s to wait for apiserver process to appear ...
	I0410 21:30:44.593880   14058 api_server.go:88] waiting for apiserver healthz status ...
	I0410 21:30:44.593908   14058 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0410 21:30:44.597820   14058 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0410 21:30:44.598903   14058 api_server.go:141] control plane version: v1.29.3
	I0410 21:30:44.598922   14058 api_server.go:131] duration metric: took 5.034648ms to wait for apiserver health ...
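
The healthz check above is a plain HTTPS GET against the apiserver endpoint from the log. A stdlib-only sketch of the same probe, assuming the same address and skipping certificate verification the way a quick out-of-cluster probe against a self-signed apiserver certificate typically must (a real client would pin the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification only for this throwaway probe; minikube's apiserver cert is self-signed.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.209:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Expect "200: ok" once the control plane is healthy, matching the log above.
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
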
	I0410 21:30:44.598932   14058 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 21:30:44.710699   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:44.782351   14058 system_pods.go:59] 18 kube-system pods found
	I0410 21:30:44.782381   14058 system_pods.go:61] "coredns-76f75df574-5whqs" [e02909ca-d926-479c-9994-d31142224b51] Running
	I0410 21:30:44.782388   14058 system_pods.go:61] "csi-hostpath-attacher-0" [351cc938-dc53-4f88-be8d-145ee0af8a56] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0410 21:30:44.782394   14058 system_pods.go:61] "csi-hostpath-resizer-0" [063bb3ac-1ff2-458e-a667-3dc259cd55d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0410 21:30:44.782402   14058 system_pods.go:61] "csi-hostpathplugin-lmmdh" [e3fc2d72-68cd-4a4a-a959-249492ed517d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0410 21:30:44.782406   14058 system_pods.go:61] "etcd-addons-577364" [aa11c0e8-03e2-4362-9938-902c478092f5] Running
	I0410 21:30:44.782410   14058 system_pods.go:61] "kube-apiserver-addons-577364" [aebe7fa1-b8ea-4a91-9e6c-b5ed0542c05f] Running
	I0410 21:30:44.782413   14058 system_pods.go:61] "kube-controller-manager-addons-577364" [b10ca31d-4a49-4cde-9b9c-858d079047f4] Running
	I0410 21:30:44.782416   14058 system_pods.go:61] "kube-ingress-dns-minikube" [617933ce-8840-4242-bbed-1e88480de282] Running
	I0410 21:30:44.782419   14058 system_pods.go:61] "kube-proxy-6gx5s" [55cff16f-64e2-4c97-ad79-9eb5f6fee0cc] Running
	I0410 21:30:44.782422   14058 system_pods.go:61] "kube-scheduler-addons-577364" [6b9e7f4d-3b4a-4cdb-9586-e5a249b02ae8] Running
	I0410 21:30:44.782427   14058 system_pods.go:61] "metrics-server-75d6c48ddd-bd56m" [cb0f0dc5-19c6-4cec-a7f5-82bd11fc7537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 21:30:44.782433   14058 system_pods.go:61] "nvidia-device-plugin-daemonset-s9dvf" [a5317961-998f-441b-aa29-8cd21367e96c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0410 21:30:44.782439   14058 system_pods.go:61] "registry-7rzv5" [d1bcce9f-b2cd-45a4-a0c8-cff2fc3184d2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0410 21:30:44.782450   14058 system_pods.go:61] "registry-proxy-lztl5" [c1d5454c-bd26-48f1-acdd-90e02e04ff42] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0410 21:30:44.782461   14058 system_pods.go:61] "snapshot-controller-58dbcc7b99-8hlxw" [0b561d3f-007a-4293-b453-e3f2b38b2970] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0410 21:30:44.782467   14058 system_pods.go:61] "snapshot-controller-58dbcc7b99-r6nmn" [5dee4f29-054d-4e6d-adbf-90411e61ec93] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0410 21:30:44.782471   14058 system_pods.go:61] "storage-provisioner" [8e7d65e1-27c4-4027-ac9d-d3ccfd94776e] Running
	I0410 21:30:44.782476   14058 system_pods.go:61] "tiller-deploy-7b677967b9-mrf5g" [eda44e0a-72f7-41d3-a030-7ecf1007bab9] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0410 21:30:44.782483   14058 system_pods.go:74] duration metric: took 183.54466ms to wait for pod list to return data ...
	I0410 21:30:44.782492   14058 default_sa.go:34] waiting for default service account to be created ...
	I0410 21:30:44.810604   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:44.813697   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:44.975062   14058 default_sa.go:45] found service account: "default"
	I0410 21:30:44.975087   14058 default_sa.go:55] duration metric: took 192.587893ms for default service account to be created ...
	I0410 21:30:44.975096   14058 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 21:30:45.030244   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:45.181705   14058 system_pods.go:86] 18 kube-system pods found
	I0410 21:30:45.181732   14058 system_pods.go:89] "coredns-76f75df574-5whqs" [e02909ca-d926-479c-9994-d31142224b51] Running
	I0410 21:30:45.181739   14058 system_pods.go:89] "csi-hostpath-attacher-0" [351cc938-dc53-4f88-be8d-145ee0af8a56] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0410 21:30:45.181747   14058 system_pods.go:89] "csi-hostpath-resizer-0" [063bb3ac-1ff2-458e-a667-3dc259cd55d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0410 21:30:45.181756   14058 system_pods.go:89] "csi-hostpathplugin-lmmdh" [e3fc2d72-68cd-4a4a-a959-249492ed517d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0410 21:30:45.181760   14058 system_pods.go:89] "etcd-addons-577364" [aa11c0e8-03e2-4362-9938-902c478092f5] Running
	I0410 21:30:45.181764   14058 system_pods.go:89] "kube-apiserver-addons-577364" [aebe7fa1-b8ea-4a91-9e6c-b5ed0542c05f] Running
	I0410 21:30:45.181769   14058 system_pods.go:89] "kube-controller-manager-addons-577364" [b10ca31d-4a49-4cde-9b9c-858d079047f4] Running
	I0410 21:30:45.181774   14058 system_pods.go:89] "kube-ingress-dns-minikube" [617933ce-8840-4242-bbed-1e88480de282] Running
	I0410 21:30:45.181778   14058 system_pods.go:89] "kube-proxy-6gx5s" [55cff16f-64e2-4c97-ad79-9eb5f6fee0cc] Running
	I0410 21:30:45.181781   14058 system_pods.go:89] "kube-scheduler-addons-577364" [6b9e7f4d-3b4a-4cdb-9586-e5a249b02ae8] Running
	I0410 21:30:45.181787   14058 system_pods.go:89] "metrics-server-75d6c48ddd-bd56m" [cb0f0dc5-19c6-4cec-a7f5-82bd11fc7537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 21:30:45.181795   14058 system_pods.go:89] "nvidia-device-plugin-daemonset-s9dvf" [a5317961-998f-441b-aa29-8cd21367e96c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0410 21:30:45.181804   14058 system_pods.go:89] "registry-7rzv5" [d1bcce9f-b2cd-45a4-a0c8-cff2fc3184d2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0410 21:30:45.181810   14058 system_pods.go:89] "registry-proxy-lztl5" [c1d5454c-bd26-48f1-acdd-90e02e04ff42] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0410 21:30:45.181818   14058 system_pods.go:89] "snapshot-controller-58dbcc7b99-8hlxw" [0b561d3f-007a-4293-b453-e3f2b38b2970] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0410 21:30:45.181825   14058 system_pods.go:89] "snapshot-controller-58dbcc7b99-r6nmn" [5dee4f29-054d-4e6d-adbf-90411e61ec93] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0410 21:30:45.181829   14058 system_pods.go:89] "storage-provisioner" [8e7d65e1-27c4-4027-ac9d-d3ccfd94776e] Running
	I0410 21:30:45.181835   14058 system_pods.go:89] "tiller-deploy-7b677967b9-mrf5g" [eda44e0a-72f7-41d3-a030-7ecf1007bab9] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0410 21:30:45.181841   14058 system_pods.go:126] duration metric: took 206.741049ms to wait for k8s-apps to be running ...
	I0410 21:30:45.181853   14058 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 21:30:45.181894   14058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 21:30:45.197910   14058 system_svc.go:56] duration metric: took 16.046492ms WaitForService to wait for kubelet
	I0410 21:30:45.197944   14058 kubeadm.go:576] duration metric: took 19.139575219s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 21:30:45.197969   14058 node_conditions.go:102] verifying NodePressure condition ...
	I0410 21:30:45.210010   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:45.312242   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:45.315994   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:45.375863   14058 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 21:30:45.375892   14058 node_conditions.go:123] node cpu capacity is 2
	I0410 21:30:45.375903   14058 node_conditions.go:105] duration metric: took 177.929755ms to run NodePressure ...
	I0410 21:30:45.375914   14058 start.go:240] waiting for startup goroutines ...
	I0410 21:30:45.528583   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:45.710604   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:45.811060   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:45.813764   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:46.027708   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:46.211239   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:46.311569   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:46.314287   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:46.528176   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:46.711304   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:46.811938   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:46.814630   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:47.027745   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:47.210937   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:47.312423   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:47.315494   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:47.530976   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:47.713870   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:47.812325   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:47.815301   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:48.027158   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:48.211356   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:48.310248   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:48.314341   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:48.527027   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:48.711240   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:48.811876   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:48.814104   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:49.028769   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:49.210437   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:49.317268   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:49.320508   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:49.528698   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:49.710528   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:49.810768   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:49.814081   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:50.028611   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:50.211087   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:50.311633   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:50.317473   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:50.528900   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:50.710957   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:50.811555   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:50.814549   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:51.031391   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:51.326018   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:51.326543   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:51.331945   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:51.527725   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:51.710817   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:51.811404   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:51.814261   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:52.026941   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:52.211228   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:52.311382   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:52.314351   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:52.528841   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:52.712310   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:52.813929   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:52.814621   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:53.031079   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:53.285017   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:53.631844   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:53.633705   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:53.635136   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:53.711596   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:53.810599   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:53.814054   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:54.027546   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:54.210307   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:54.310989   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:54.313800   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:54.528037   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:54.711589   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:54.811906   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:54.814901   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:55.027212   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:55.211463   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:55.310871   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:55.313740   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:55.529477   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:55.711057   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:55.812328   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:55.816557   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:56.032791   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:56.212124   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:56.311074   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:56.314978   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:56.529431   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:56.711434   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:56.811463   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:56.817694   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:57.027216   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:57.212720   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:57.311332   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:57.317449   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:57.527806   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:57.710922   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:57.818725   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:57.839376   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:58.029393   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:58.214640   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:58.310866   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:58.314379   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:58.528977   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:58.711462   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:58.812321   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:58.815014   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:59.031103   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:59.210558   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:59.313016   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:59.314884   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:30:59.533600   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:30:59.710904   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:30:59.811554   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:30:59.814641   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:00.028814   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:00.210828   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:00.313037   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:00.314836   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:00.529304   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:00.710974   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:00.813464   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:00.815783   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:01.027784   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:01.210327   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:01.311837   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:01.315180   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:01.527795   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:01.710957   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:01.813182   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:01.816467   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:02.034532   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:02.210908   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:02.311067   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:02.315023   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:02.527946   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:02.710703   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:02.811181   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:02.814120   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:03.028609   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:03.210949   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:03.311229   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:03.317779   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:03.535453   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:03.711556   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:03.810690   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:03.814991   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:04.028347   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:04.212306   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:04.310665   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:04.313878   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:04.527751   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:04.711576   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:04.810769   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:04.813661   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:05.027436   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:05.212933   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:05.312490   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:05.314874   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:05.528094   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:05.711294   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:05.811799   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:05.815821   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:06.027668   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:06.211198   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:06.310778   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:06.314083   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:06.529585   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:06.710697   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:06.812029   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:06.814393   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:07.027306   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:07.211597   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:07.311370   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:07.314093   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:07.527686   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:07.710678   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:07.811160   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:07.813655   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:08.028636   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:08.211147   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:08.310927   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:08.313808   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:08.527680   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:08.711294   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:08.810948   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:08.814489   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:09.027650   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:09.211472   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:09.311246   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:09.314217   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:09.528277   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:09.710540   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:09.811953   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0410 21:31:09.814772   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:10.029712   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:10.212314   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:10.312601   14058 kapi.go:107] duration metric: took 34.507616617s to wait for kubernetes.io/minikube-addons=registry ...
	I0410 21:31:10.314894   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:10.529439   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:10.711724   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:10.814986   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:11.027554   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:11.211320   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:11.315530   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:11.528034   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:11.711486   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:11.814740   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:12.028228   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:12.211185   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:12.315417   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:12.530709   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:12.711256   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:12.816952   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:13.028773   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:13.211358   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:13.317449   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:13.529125   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:13.717842   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:13.816580   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:14.028060   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:14.331212   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:14.341734   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:14.528255   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:14.710585   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:14.816038   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:15.030307   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:15.210972   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:15.315322   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:15.528384   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:15.711119   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:15.819542   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:16.028017   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:16.217009   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:16.315902   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:16.527846   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:16.711288   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:16.816475   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:17.029237   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:17.211226   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:17.315447   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:17.532133   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:17.711804   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:17.821598   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:18.028004   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:18.211696   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:18.315719   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:18.798572   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:18.803622   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:18.815325   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:19.035612   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:19.211598   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:19.317777   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:19.527164   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:19.711940   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:19.815497   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:20.028487   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:20.210946   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:20.315280   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:20.528570   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:20.711414   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:20.817582   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:21.030298   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:21.211287   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:21.315156   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:21.527979   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:21.710981   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:21.814716   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:22.028247   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:22.211263   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:22.315563   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:22.529919   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:22.710962   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:22.815019   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:23.028028   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:23.210589   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:23.314265   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:23.530501   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:23.711662   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:23.816444   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:24.027352   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:24.217442   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:24.314968   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:24.530382   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:24.711094   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:24.816038   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:25.027288   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:25.211177   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:25.318512   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:25.530368   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:25.711627   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:25.814860   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:26.027712   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:26.216155   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:26.315220   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:26.528769   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:26.711519   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:26.814376   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:27.027958   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:27.215089   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:27.315506   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:27.527743   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:27.711149   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:27.815431   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:28.027788   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:28.211000   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:28.316034   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:28.528067   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:28.711199   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:28.815604   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:29.028509   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:29.211733   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:29.314829   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:29.528519   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:29.711881   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:29.816274   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:30.032266   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:30.211523   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:30.314986   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:30.528549   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:30.711623   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:30.815078   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:31.031615   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:31.211476   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:31.314888   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:31.530595   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:31.710220   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:31.815936   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:32.027735   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:32.210732   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:32.314843   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:32.527714   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:32.710887   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:32.818302   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:33.027345   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:33.211034   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:33.315037   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:33.719021   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:33.723365   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:33.815425   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:34.042404   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:34.212127   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:34.315591   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:34.534680   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:34.710777   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:34.818077   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:35.028429   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:35.212132   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:35.315120   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:35.528075   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:35.710915   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:35.815171   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:36.028226   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:36.211026   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:36.317424   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:36.530479   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:36.711706   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:36.814538   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:37.027396   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:37.211427   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:37.314899   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:37.530742   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:37.710645   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:37.815565   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:38.027938   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:38.211705   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:38.320577   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:38.527974   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:38.711627   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:38.831886   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:39.027085   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:39.211868   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:39.316765   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:39.540307   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:39.711887   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:39.816696   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:40.028287   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:40.212009   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:40.319178   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:40.539719   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:40.713693   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:40.815950   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:41.028074   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:41.210297   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:41.317225   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:41.528376   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:41.711831   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:41.815465   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:42.036166   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:42.211393   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:42.316448   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:42.527800   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:42.711260   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:42.818524   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:43.027335   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:43.218919   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:43.315561   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:43.527788   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:43.711672   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:43.815179   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:44.029497   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:44.211465   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:44.316622   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:44.527871   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:44.712154   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:44.815665   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:45.503922   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:45.504255   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:45.504261   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:45.529187   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:45.712008   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:45.815081   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:46.028366   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:46.210700   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:46.316548   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:46.527309   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:46.711170   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:46.815380   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:47.027147   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:47.211063   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:47.315097   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:47.528706   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:47.713824   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:47.815517   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:48.028716   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:48.491905   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:48.492142   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:48.530027   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:48.711738   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:48.815018   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:49.028838   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:49.211230   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:49.315797   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:49.527825   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:49.711665   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:49.815207   14058 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0410 21:31:50.028633   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:50.217174   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:50.316465   14058 kapi.go:107] duration metric: took 1m14.508224423s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0410 21:31:50.528984   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:50.711663   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:51.032254   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:51.211516   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:51.527620   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:51.714476   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:52.028379   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:52.211714   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:52.528201   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:52.711986   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:53.028047   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:53.212131   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:53.527672   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:53.710913   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:54.027034   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:54.224738   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0410 21:31:54.529055   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:54.716468   14058 kapi.go:107] duration metric: took 1m15.509575944s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0410 21:31:54.718298   14058 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-577364 cluster.
	I0410 21:31:54.719760   14058 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0410 21:31:54.721213   14058 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0410 21:31:55.027194   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:55.527219   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:56.027301   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:56.778111   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:57.028964   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:57.527376   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:58.028196   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:58.527085   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:59.027971   14058 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0410 21:31:59.534041   14058 kapi.go:107] duration metric: took 1m22.512276845s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0410 21:31:59.536137   14058 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, metrics-server, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0410 21:31:59.537699   14058 addons.go:505] duration metric: took 1m33.479290238s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner metrics-server helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0410 21:31:59.537742   14058 start.go:245] waiting for cluster config update ...
	I0410 21:31:59.537762   14058 start.go:254] writing updated cluster config ...
	I0410 21:31:59.537989   14058 ssh_runner.go:195] Run: rm -f paused
	I0410 21:31:59.598026   14058 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 21:31:59.600012   14058 out.go:177] * Done! kubectl is now configured to use "addons-577364" cluster and "default" namespace by default
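	The gcp-auth messages above describe how credential injection can be skipped per pod. As a minimal sketch (not part of this test run; the pod name and image tag are illustrative, while the label key and the --refresh hint are the ones printed by the addon), a pod can opt out by carrying the `gcp-auth-skip-secret` label:
	
	  # Hypothetical example: start a pod that skips GCP credential mounting
	  kubectl run skip-gcp-demo --image=gcr.io/google-samples/hello-app:1.0 --labels=gcp-auth-skip-secret=true
	  # Existing pods only pick up credentials after being recreated, or after re-enabling the addon:
	  minikube addons enable gcp-auth --refresh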
	
	
	==> CRI-O <==
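	The debug entries below show the kubelet polling CRI-O over the CRI API (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers). As a sketch only (these commands were not run by the test; the socket path is assumed to be CRI-O's default), the same queries can be issued by hand with crictl:
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers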
	Apr 10 21:34:53 addons-577364 crio[682]: time="2024-04-10 21:34:53.972173668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712784893972145481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573323,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f56ed3d5-8164-45c5-9086-d7735feaec86 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:34:53 addons-577364 crio[682]: time="2024-04-10 21:34:53.972985573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7b91818-afdd-4e45-bcd6-b8d2bd04540d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:53 addons-577364 crio[682]: time="2024-04-10 21:34:53.973064930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7b91818-afdd-4e45-bcd6-b8d2bd04540d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:53 addons-577364 crio[682]: time="2024-04-10 21:34:53.973371083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b464feb4f65676020fbac1f2b293b76db1936b532b894f5f05e3a7feb196a98,PodSandboxId:14da1c348b68e8506114868089c13e481453f6b5ab87344ddbbdea1de11a2fbe,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712784886938372115,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-bhxbc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8dbc81-dc63-4d0a-b174-bbb4874bf564,},Annotations:map[string]string{io.kubernetes.container.hash: 594f3cf3,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3935dae64c9d299733cf9ac48c185da9697ee610393c6fee60e9240b9d5609cc,PodSandboxId:2cc666785169f394cb57b3aa535a6a51a027daaed963af0b88fa55c7c55ca67e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712784761947941797,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-k9nwt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 03c2aa31-a311-421a-bb11-2797c4bb051f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 36b2d8aa,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a82907ea877fc9f3afc8844841f9b2c38a86e5fd89715eba5b363490401c61,PodSandboxId:cbaa5633be2fca005a75f962c7995131efbcbb8b135949e88dbe03cd19290dcd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712784743696407518,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: d06609ad-b7cc-4be0-8572-437ef14d80dd,},Annotations:map[string]string{io.kubernetes.container.hash: 249ac266,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ae8d46a47a55de65ebde03fd49787c27f2e10ec269d741c7d3d1c36d6e8d1e,PodSandboxId:9ab287ad6a94f9ae36e7d3e9f9f1d98be3bafa49867a3ffe6f5b7ee13f01a7fc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712784713674466462,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-9dhhs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0b164b8c-7871-4b0d-b881-5ed84e13a15a,},Annotations:map[string]string{io.kubernetes.container.hash: 252a6ef3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20de0805033d96fa7f5b70c70da2b12463214a92b0568804ef3ae6337f8649e2,PodSandboxId:2f30d22b6c4ab01a42e96959ef6e79a77901e545666dea18baf1b5e8806317b7,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712784707493532151,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tkmz9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad555aff-2ec2-4eb1-8cd3-555e4f6f24a7,},Annotations:map[string]string{io.kubernetes.container.hash: 30d4c80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e4794e7aada0a7d56a5e7352c1d6865dcc416124dca35ed607ce10a47a38b0c,PodSandboxId:cea35032aa829e502692bdc93d0d0bc9c4433676218f65254f837695a1459d8e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1712784693947733492,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tx59g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c59430b9-57f9-4224-b2c7-b6c596913276,},Annotations:map[string]string{io.kubernetes.container.hash: 352b4336,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572817371ac952327162e642a5463b35e670a44e91aee7b494c7bdb1345b52ea,PodSandboxId:02adec0661fcaedacfc26c1571ed03d84d71ca80b9cc57a27365ea62105fa3aa,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1712784689867515142,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-sznp7,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 136b30cb-878d-43ec-9fe7-77f52732f659,},Annotations:map[string]string{io.kubernetes.container.hash: 759b8f49,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e493ed52efe76f855ff53412722cac7fb434bb8578d8fc99eeab1f8fff270b19,PodSandboxId:a9aeaa91dfdd0eb51eb2a6680b90452ea3c52c5f524c6fcd67e51f786cf06b71,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1712784678903424423,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-bg294,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e744abe7-fbc6-4fb3-81e4-e71a1456d009,},Annotations:map[string]string{io.kubernetes.container.hash: efe5f7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cde0b3ae0bccfb3f13573eba3ccc04f1dc87f130ba80cb6121feaac314f80f,PodSandboxId:23fe1207e104726417e1b1556b560cc50b5b6b9338ebfe36dd592f0e3cadc3de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712784632581388687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d65e1-27c4-4027-ac9d-d3ccfd94776e,},Annotations:map[string]string{io.kubernetes.container.hash: 3a733b23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddb35ca17bb07ad24682875b2fd28d1d5da31ff44048b35a4fcb85db69ba98b9,PodSandboxId:10145100e1819108e3ed5954c8eb9f2af0f0cede4b36918d90d8fc542844a2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712784630191757408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5whqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e02909ca-d926-479c-9994-d31142224b51,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8e53ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce81206bf21bf1cd12a34d2d2ba3fb437b15c0c608dca9b962d453ec9d8a50,PodSandboxId:d420d5dfc0ce02b794f4a29f82fd72e0e0bd1e63309a1f0a6e
9dd1951d5fb9e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712784626711649324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gx5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cff16f-64e2-4c97-ad79-9eb5f6fee0cc,},Annotations:map[string]string{io.kubernetes.container.hash: d1d91233,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79823ce30e3a9c5a750a98d7f41540b2f13f3946b8a809a43aa3c4f78c262c7,PodSandboxId:048bf5e73c5307b923fed9f90bc8cd0ceadf8e45f016978935baddb3f42dfcb5,Metadata:&Containe
rMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712784606596095643,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec3082f7957ec74ca05ca14341794fb,},Annotations:map[string]string{io.kubernetes.container.hash: ce72760a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db1f1e91e036ebc507da67d76beebb0a7f95fb578807d53546e732d89930655,PodSandboxId:96debf077442716ab2ee5c90db60be964481828fd05001732677f051b6618e9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image
:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712784606586336069,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07d5d42aa952f58acaf881612511c87,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff191f808df8733ede34b30e98a0c7c983ce447b61c42112c71c0a24fd03a359,PodSandboxId:dd39e05b7b22923b7488539cb1f66aec09bd3e2cac6d1663810c3d257b146c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image
:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712784606571600946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2add46714c7a6ceb25423752a4f3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9116f2b3094787cfb8cfc2b56f700c3f7be759d74e6590811f8a236dddfc3e9,PodSandboxId:1b9d1664ec824d48d70852382643e18dfef4929eb6fc623897fdd4406c24dc4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25
da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712784606483497308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c8e1de0f344c61bf9108b27e678629,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7b91818-afdd-4e45-bcd6-b8d2bd04540d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.012745074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fd191c4-1667-4735-88e5-f00a0882d596 name=/runtime.v1.RuntimeService/Version
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.013003456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fd191c4-1667-4735-88e5-f00a0882d596 name=/runtime.v1.RuntimeService/Version
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.014621891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdde3d04-6816-4624-82d3-a67a73f30a81 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.015959080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712784894015931801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573323,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdde3d04-6816-4624-82d3-a67a73f30a81 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.016597173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8612d662-bdd5-4c32-8636-f208ce547d44 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.016674663Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8612d662-bdd5-4c32-8636-f208ce547d44 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.017049389Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b464feb4f65676020fbac1f2b293b76db1936b532b894f5f05e3a7feb196a98,PodSandboxId:14da1c348b68e8506114868089c13e481453f6b5ab87344ddbbdea1de11a2fbe,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712784886938372115,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-bhxbc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8dbc81-dc63-4d0a-b174-bbb4874bf564,},Annotations:map[string]string{io.kubernetes.container.hash: 594f3cf3,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3935dae64c9d299733cf9ac48c185da9697ee610393c6fee60e9240b9d5609cc,PodSandboxId:2cc666785169f394cb57b3aa535a6a51a027daaed963af0b88fa55c7c55ca67e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712784761947941797,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-k9nwt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 03c2aa31-a311-421a-bb11-2797c4bb051f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 36b2d8aa,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a82907ea877fc9f3afc8844841f9b2c38a86e5fd89715eba5b363490401c61,PodSandboxId:cbaa5633be2fca005a75f962c7995131efbcbb8b135949e88dbe03cd19290dcd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712784743696407518,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: d06609ad-b7cc-4be0-8572-437ef14d80dd,},Annotations:map[string]string{io.kubernetes.container.hash: 249ac266,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ae8d46a47a55de65ebde03fd49787c27f2e10ec269d741c7d3d1c36d6e8d1e,PodSandboxId:9ab287ad6a94f9ae36e7d3e9f9f1d98be3bafa49867a3ffe6f5b7ee13f01a7fc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712784713674466462,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-9dhhs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0b164b8c-7871-4b0d-b881-5ed84e13a15a,},Annotations:map[string]string{io.kubernetes.container.hash: 252a6ef3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20de0805033d96fa7f5b70c70da2b12463214a92b0568804ef3ae6337f8649e2,PodSandboxId:2f30d22b6c4ab01a42e96959ef6e79a77901e545666dea18baf1b5e8806317b7,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712784707493532151,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tkmz9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad555aff-2ec2-4eb1-8cd3-555e4f6f24a7,},Annotations:map[string]string{io.kubernetes.container.hash: 30d4c80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e4794e7aada0a7d56a5e7352c1d6865dcc416124dca35ed607ce10a47a38b0c,PodSandboxId:cea35032aa829e502692bdc93d0d0bc9c4433676218f65254f837695a1459d8e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1712784693947733492,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tx59g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c59430b9-57f9-4224-b2c7-b6c596913276,},Annotations:map[string]string{io.kubernetes.container.hash: 352b4336,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572817371ac952327162e642a5463b35e670a44e91aee7b494c7bdb1345b52ea,PodSandboxId:02adec0661fcaedacfc26c1571ed03d84d71ca80b9cc57a27365ea62105fa3aa,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1712784689867515142,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-sznp7,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 136b30cb-878d-43ec-9fe7-77f52732f659,},Annotations:map[string]string{io.kubernetes.container.hash: 759b8f49,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e493ed52efe76f855ff53412722cac7fb434bb8578d8fc99eeab1f8fff270b19,PodSandboxId:a9aeaa91dfdd0eb51eb2a6680b90452ea3c52c5f524c6fcd67e51f786cf06b71,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1712784678903424423,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-bg294,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e744abe7-fbc6-4fb3-81e4-e71a1456d009,},Annotations:map[string]string{io.kubernetes.container.hash: efe5f7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cde0b3ae0bccfb3f13573eba3ccc04f1dc87f130ba80cb6121feaac314f80f,PodSandboxId:23fe1207e104726417e1b1556b560cc50b5b6b9338ebfe36dd592f0e3cadc3de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712784632581388687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d65e1-27c4-4027-ac9d-d3ccfd94776e,},Annotations:map[string]string{io.kubernetes.container.hash: 3a733b23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddb35ca17bb07ad24682875b2fd28d1d5da31ff44048b35a4fcb85db69ba98b9,PodSandboxId:10145100e1819108e3ed5954c8eb9f2af0f0cede4b36918d90d8fc542844a2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712784630191757408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5whqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e02909ca-d926-479c-9994-d31142224b51,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8e53ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce81206bf21bf1cd12a34d2d2ba3fb437b15c0c608dca9b962d453ec9d8a50,PodSandboxId:d420d5dfc0ce02b794f4a29f82fd72e0e0bd1e63309a1f0a6e
9dd1951d5fb9e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712784626711649324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gx5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cff16f-64e2-4c97-ad79-9eb5f6fee0cc,},Annotations:map[string]string{io.kubernetes.container.hash: d1d91233,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79823ce30e3a9c5a750a98d7f41540b2f13f3946b8a809a43aa3c4f78c262c7,PodSandboxId:048bf5e73c5307b923fed9f90bc8cd0ceadf8e45f016978935baddb3f42dfcb5,Metadata:&Containe
rMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712784606596095643,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec3082f7957ec74ca05ca14341794fb,},Annotations:map[string]string{io.kubernetes.container.hash: ce72760a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db1f1e91e036ebc507da67d76beebb0a7f95fb578807d53546e732d89930655,PodSandboxId:96debf077442716ab2ee5c90db60be964481828fd05001732677f051b6618e9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image
:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712784606586336069,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07d5d42aa952f58acaf881612511c87,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff191f808df8733ede34b30e98a0c7c983ce447b61c42112c71c0a24fd03a359,PodSandboxId:dd39e05b7b22923b7488539cb1f66aec09bd3e2cac6d1663810c3d257b146c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image
:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712784606571600946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2add46714c7a6ceb25423752a4f3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9116f2b3094787cfb8cfc2b56f700c3f7be759d74e6590811f8a236dddfc3e9,PodSandboxId:1b9d1664ec824d48d70852382643e18dfef4929eb6fc623897fdd4406c24dc4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25
da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712784606483497308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c8e1de0f344c61bf9108b27e678629,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8612d662-bdd5-4c32-8636-f208ce547d44 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.056512988Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0403fb2e-42c8-4dcc-908f-375ec46d7c01 name=/runtime.v1.RuntimeService/Version
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.056616276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0403fb2e-42c8-4dcc-908f-375ec46d7c01 name=/runtime.v1.RuntimeService/Version
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.057977393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e3d10aa-f3c7-4c41-a32e-e1071bddb9c2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.059414149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712784894059385406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573323,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e3d10aa-f3c7-4c41-a32e-e1071bddb9c2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.060107064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08c42e2f-a929-48f9-8c24-78cf79584783 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.060184907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08c42e2f-a929-48f9-8c24-78cf79584783 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.060688816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b464feb4f65676020fbac1f2b293b76db1936b532b894f5f05e3a7feb196a98,PodSandboxId:14da1c348b68e8506114868089c13e481453f6b5ab87344ddbbdea1de11a2fbe,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712784886938372115,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-bhxbc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8dbc81-dc63-4d0a-b174-bbb4874bf564,},Annotations:map[string]string{io.kubernetes.container.hash: 594f3cf3,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3935dae64c9d299733cf9ac48c185da9697ee610393c6fee60e9240b9d5609cc,PodSandboxId:2cc666785169f394cb57b3aa535a6a51a027daaed963af0b88fa55c7c55ca67e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712784761947941797,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-k9nwt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 03c2aa31-a311-421a-bb11-2797c4bb051f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 36b2d8aa,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a82907ea877fc9f3afc8844841f9b2c38a86e5fd89715eba5b363490401c61,PodSandboxId:cbaa5633be2fca005a75f962c7995131efbcbb8b135949e88dbe03cd19290dcd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712784743696407518,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: d06609ad-b7cc-4be0-8572-437ef14d80dd,},Annotations:map[string]string{io.kubernetes.container.hash: 249ac266,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ae8d46a47a55de65ebde03fd49787c27f2e10ec269d741c7d3d1c36d6e8d1e,PodSandboxId:9ab287ad6a94f9ae36e7d3e9f9f1d98be3bafa49867a3ffe6f5b7ee13f01a7fc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712784713674466462,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-9dhhs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0b164b8c-7871-4b0d-b881-5ed84e13a15a,},Annotations:map[string]string{io.kubernetes.container.hash: 252a6ef3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20de0805033d96fa7f5b70c70da2b12463214a92b0568804ef3ae6337f8649e2,PodSandboxId:2f30d22b6c4ab01a42e96959ef6e79a77901e545666dea18baf1b5e8806317b7,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712784707493532151,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tkmz9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad555aff-2ec2-4eb1-8cd3-555e4f6f24a7,},Annotations:map[string]string{io.kubernetes.container.hash: 30d4c80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e4794e7aada0a7d56a5e7352c1d6865dcc416124dca35ed607ce10a47a38b0c,PodSandboxId:cea35032aa829e502692bdc93d0d0bc9c4433676218f65254f837695a1459d8e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1712784693947733492,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tx59g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c59430b9-57f9-4224-b2c7-b6c596913276,},Annotations:map[string]string{io.kubernetes.container.hash: 352b4336,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572817371ac952327162e642a5463b35e670a44e91aee7b494c7bdb1345b52ea,PodSandboxId:02adec0661fcaedacfc26c1571ed03d84d71ca80b9cc57a27365ea62105fa3aa,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1712784689867515142,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-sznp7,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 136b30cb-878d-43ec-9fe7-77f52732f659,},Annotations:map[string]string{io.kubernetes.container.hash: 759b8f49,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e493ed52efe76f855ff53412722cac7fb434bb8578d8fc99eeab1f8fff270b19,PodSandboxId:a9aeaa91dfdd0eb51eb2a6680b90452ea3c52c5f524c6fcd67e51f786cf06b71,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1712784678903424423,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-bg294,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e744abe7-fbc6-4fb3-81e4-e71a1456d009,},Annotations:map[string]string{io.kubernetes.container.hash: efe5f7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cde0b3ae0bccfb3f13573eba3ccc04f1dc87f130ba80cb6121feaac314f80f,PodSandboxId:23fe1207e104726417e1b1556b560cc50b5b6b9338ebfe36dd592f0e3cadc3de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712784632581388687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d65e1-27c4-4027-ac9d-d3ccfd94776e,},Annotations:map[string]string{io.kubernetes.container.hash: 3a733b23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddb35ca17bb07ad24682875b2fd28d1d5da31ff44048b35a4fcb85db69ba98b9,PodSandboxId:10145100e1819108e3ed5954c8eb9f2af0f0cede4b36918d90d8fc542844a2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712784630191757408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5whqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e02909ca-d926-479c-9994-d31142224b51,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8e53ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce81206bf21bf1cd12a34d2d2ba3fb437b15c0c608dca9b962d453ec9d8a50,PodSandboxId:d420d5dfc0ce02b794f4a29f82fd72e0e0bd1e63309a1f0a6e
9dd1951d5fb9e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712784626711649324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gx5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cff16f-64e2-4c97-ad79-9eb5f6fee0cc,},Annotations:map[string]string{io.kubernetes.container.hash: d1d91233,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79823ce30e3a9c5a750a98d7f41540b2f13f3946b8a809a43aa3c4f78c262c7,PodSandboxId:048bf5e73c5307b923fed9f90bc8cd0ceadf8e45f016978935baddb3f42dfcb5,Metadata:&Containe
rMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712784606596095643,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec3082f7957ec74ca05ca14341794fb,},Annotations:map[string]string{io.kubernetes.container.hash: ce72760a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db1f1e91e036ebc507da67d76beebb0a7f95fb578807d53546e732d89930655,PodSandboxId:96debf077442716ab2ee5c90db60be964481828fd05001732677f051b6618e9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image
:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712784606586336069,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07d5d42aa952f58acaf881612511c87,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff191f808df8733ede34b30e98a0c7c983ce447b61c42112c71c0a24fd03a359,PodSandboxId:dd39e05b7b22923b7488539cb1f66aec09bd3e2cac6d1663810c3d257b146c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image
:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712784606571600946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2add46714c7a6ceb25423752a4f3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9116f2b3094787cfb8cfc2b56f700c3f7be759d74e6590811f8a236dddfc3e9,PodSandboxId:1b9d1664ec824d48d70852382643e18dfef4929eb6fc623897fdd4406c24dc4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25
da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712784606483497308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c8e1de0f344c61bf9108b27e678629,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08c42e2f-a929-48f9-8c24-78cf79584783 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.101563045Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92aacd1b-3426-4687-a314-090a58d171c6 name=/runtime.v1.RuntimeService/Version
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.101845294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92aacd1b-3426-4687-a314-090a58d171c6 name=/runtime.v1.RuntimeService/Version
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.102773473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02bf6887-9780-4fe5-9429-1ee45d5dc506 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.104032587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712784894103981888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573323,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02bf6887-9780-4fe5-9429-1ee45d5dc506 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.104614347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2693020d-2f0c-4a3d-a110-b3038ffc2c74 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.104705698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2693020d-2f0c-4a3d-a110-b3038ffc2c74 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:34:54 addons-577364 crio[682]: time="2024-04-10 21:34:54.105225856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b464feb4f65676020fbac1f2b293b76db1936b532b894f5f05e3a7feb196a98,PodSandboxId:14da1c348b68e8506114868089c13e481453f6b5ab87344ddbbdea1de11a2fbe,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712784886938372115,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-bhxbc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8dbc81-dc63-4d0a-b174-bbb4874bf564,},Annotations:map[string]string{io.kubernetes.container.hash: 594f3cf3,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3935dae64c9d299733cf9ac48c185da9697ee610393c6fee60e9240b9d5609cc,PodSandboxId:2cc666785169f394cb57b3aa535a6a51a027daaed963af0b88fa55c7c55ca67e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712784761947941797,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-k9nwt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 03c2aa31-a311-421a-bb11-2797c4bb051f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 36b2d8aa,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a82907ea877fc9f3afc8844841f9b2c38a86e5fd89715eba5b363490401c61,PodSandboxId:cbaa5633be2fca005a75f962c7995131efbcbb8b135949e88dbe03cd19290dcd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712784743696407518,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: d06609ad-b7cc-4be0-8572-437ef14d80dd,},Annotations:map[string]string{io.kubernetes.container.hash: 249ac266,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ae8d46a47a55de65ebde03fd49787c27f2e10ec269d741c7d3d1c36d6e8d1e,PodSandboxId:9ab287ad6a94f9ae36e7d3e9f9f1d98be3bafa49867a3ffe6f5b7ee13f01a7fc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712784713674466462,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-9dhhs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0b164b8c-7871-4b0d-b881-5ed84e13a15a,},Annotations:map[string]string{io.kubernetes.container.hash: 252a6ef3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20de0805033d96fa7f5b70c70da2b12463214a92b0568804ef3ae6337f8649e2,PodSandboxId:2f30d22b6c4ab01a42e96959ef6e79a77901e545666dea18baf1b5e8806317b7,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712784707493532151,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tkmz9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad555aff-2ec2-4eb1-8cd3-555e4f6f24a7,},Annotations:map[string]string{io.kubernetes.container.hash: 30d4c80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e4794e7aada0a7d56a5e7352c1d6865dcc416124dca35ed607ce10a47a38b0c,PodSandboxId:cea35032aa829e502692bdc93d0d0bc9c4433676218f65254f837695a1459d8e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1712784693947733492,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tx59g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c59430b9-57f9-4224-b2c7-b6c596913276,},Annotations:map[string]string{io.kubernetes.container.hash: 352b4336,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572817371ac952327162e642a5463b35e670a44e91aee7b494c7bdb1345b52ea,PodSandboxId:02adec0661fcaedacfc26c1571ed03d84d71ca80b9cc57a27365ea62105fa3aa,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1712784689867515142,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-sznp7,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 136b30cb-878d-43ec-9fe7-77f52732f659,},Annotations:map[string]string{io.kubernetes.container.hash: 759b8f49,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e493ed52efe76f855ff53412722cac7fb434bb8578d8fc99eeab1f8fff270b19,PodSandboxId:a9aeaa91dfdd0eb51eb2a6680b90452ea3c52c5f524c6fcd67e51f786cf06b71,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1712784678903424423,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-bg294,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e744abe7-fbc6-4fb3-81e4-e71a1456d009,},Annotations:map[string]string{io.kubernetes.container.hash: efe5f7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cde0b3ae0bccfb3f13573eba3ccc04f1dc87f130ba80cb6121feaac314f80f,PodSandboxId:23fe1207e104726417e1b1556b560cc50b5b6b9338ebfe36dd592f0e3cadc3de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712784632581388687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d65e1-27c4-4027-ac9d-d3ccfd94776e,},Annotations:map[string]string{io.kubernetes.container.hash: 3a733b23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddb35ca17bb07ad24682875b2fd28d1d5da31ff44048b35a4fcb85db69ba98b9,PodSandboxId:10145100e1819108e3ed5954c8eb9f2af0f0cede4b36918d90d8fc542844a2c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712784630191757408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5whqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e02909ca-d926-479c-9994-d31142224b51,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8e53ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce81206bf21bf1cd12a34d2d2ba3fb437b15c0c608dca9b962d453ec9d8a50,PodSandboxId:d420d5dfc0ce02b794f4a29f82fd72e0e0bd1e63309a1f0a6e
9dd1951d5fb9e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712784626711649324,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gx5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55cff16f-64e2-4c97-ad79-9eb5f6fee0cc,},Annotations:map[string]string{io.kubernetes.container.hash: d1d91233,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79823ce30e3a9c5a750a98d7f41540b2f13f3946b8a809a43aa3c4f78c262c7,PodSandboxId:048bf5e73c5307b923fed9f90bc8cd0ceadf8e45f016978935baddb3f42dfcb5,Metadata:&Containe
rMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712784606596095643,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec3082f7957ec74ca05ca14341794fb,},Annotations:map[string]string{io.kubernetes.container.hash: ce72760a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db1f1e91e036ebc507da67d76beebb0a7f95fb578807d53546e732d89930655,PodSandboxId:96debf077442716ab2ee5c90db60be964481828fd05001732677f051b6618e9c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image
:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712784606586336069,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07d5d42aa952f58acaf881612511c87,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff191f808df8733ede34b30e98a0c7c983ce447b61c42112c71c0a24fd03a359,PodSandboxId:dd39e05b7b22923b7488539cb1f66aec09bd3e2cac6d1663810c3d257b146c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image
:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712784606571600946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e2add46714c7a6ceb25423752a4f3f5,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9116f2b3094787cfb8cfc2b56f700c3f7be759d74e6590811f8a236dddfc3e9,PodSandboxId:1b9d1664ec824d48d70852382643e18dfef4929eb6fc623897fdd4406c24dc4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25
da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712784606483497308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-577364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c8e1de0f344c61bf9108b27e678629,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2693020d-2f0c-4a3d-a110-b3038ffc2c74 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b464feb4f656       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   14da1c348b68e       hello-world-app-5d77478584-bhxbc
	3935dae64c9d2       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        2 minutes ago       Running             headlamp                  0                   2cc666785169f       headlamp-5b77dbd7c4-k9nwt
	d0a82907ea877       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                              2 minutes ago       Running             nginx                     0                   cbaa5633be2fc       nginx
	84ae8d46a47a5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   9ab287ad6a94f       gcp-auth-7d69788767-9dhhs
	20de0805033d9       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                             3 minutes ago       Exited              patch                     2                   2f30d22b6c4ab       ingress-nginx-admission-patch-tkmz9
	0e4794e7aada0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   cea35032aa829       ingress-nginx-admission-create-tx59g
	572817371ac95       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   02adec0661fca       yakd-dashboard-9947fc6bf-sznp7
	e493ed52efe76       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   a9aeaa91dfdd0       local-path-provisioner-78b46b4d5c-bg294
	72cde0b3ae0bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   23fe1207e1047       storage-provisioner
	ddb35ca17bb07       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   10145100e1819       coredns-76f75df574-5whqs
	79ce81206bf21       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                             4 minutes ago       Running             kube-proxy                0                   d420d5dfc0ce0       kube-proxy-6gx5s
	b79823ce30e3a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   048bf5e73c530       etcd-addons-577364
	1db1f1e91e036       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                             4 minutes ago       Running             kube-scheduler            0                   96debf0774427       kube-scheduler-addons-577364
	ff191f808df87       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                                             4 minutes ago       Running             kube-apiserver            0                   dd39e05b7b229       kube-apiserver-addons-577364
	c9116f2b30947       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                             4 minutes ago       Running             kube-controller-manager   0                   1b9d1664ec824       kube-controller-manager-addons-577364
	
	
	==> coredns [ddb35ca17bb07ad24682875b2fd28d1d5da31ff44048b35a4fcb85db69ba98b9] <==
	[INFO] 10.244.0.21:34595 - 11287 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000208145s
	[INFO] 10.244.0.21:55494 - 46562 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000196562s
	[INFO] 10.244.0.21:34595 - 59103 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000153543s
	[INFO] 10.244.0.21:55494 - 19785 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055109s
	[INFO] 10.244.0.21:34595 - 1143 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000305045s
	[INFO] 10.244.0.21:34595 - 29698 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065426s
	[INFO] 10.244.0.21:34595 - 34844 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000362431s
	[INFO] 10.244.0.21:55494 - 47163 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005845s
	[INFO] 10.244.0.21:55494 - 28721 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000121674s
	[INFO] 10.244.0.21:55494 - 41837 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000245651s
	[INFO] 10.244.0.21:55494 - 40455 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000104589s
	[INFO] 10.244.0.21:46633 - 9836 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091767s
	[INFO] 10.244.0.21:55710 - 30503 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057131s
	[INFO] 10.244.0.21:46633 - 17550 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054856s
	[INFO] 10.244.0.21:55710 - 55882 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057164s
	[INFO] 10.244.0.21:46633 - 45152 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075222s
	[INFO] 10.244.0.21:46633 - 50646 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003881s
	[INFO] 10.244.0.21:46633 - 31424 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037045s
	[INFO] 10.244.0.21:55710 - 38129 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006105s
	[INFO] 10.244.0.21:55710 - 44724 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047219s
	[INFO] 10.244.0.21:46633 - 39049 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055317s
	[INFO] 10.244.0.21:55710 - 45783 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035615s
	[INFO] 10.244.0.21:46633 - 53318 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060767s
	[INFO] 10.244.0.21:55710 - 37009 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000107955s
	[INFO] 10.244.0.21:55710 - 44152 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062371s
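	(Note: the alternating NXDOMAIN/NOERROR answers above are the resolver's normal search-list expansion, not a lookup failure. The queried name has fewer dots than the pod resolver's ndots threshold, so each search suffix is tried before the name is used as-is; the suffixes seen here are consistent with a pod in the ingress-nginx namespace. A resolv.conf of roughly the following shape would produce exactly this query pattern — a sketch only, and the nameserver address is illustrative, not taken from this log:
	
	    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
	    nameserver 10.96.0.10   # cluster DNS service IP (illustrative)
	    options ndots:5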
	
	
	==> describe nodes <==
	Name:               addons-577364
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-577364
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=addons-577364
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T21_30_12_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-577364
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 21:30:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-577364
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 21:34:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 21:33:17 +0000   Wed, 10 Apr 2024 21:30:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 21:33:17 +0000   Wed, 10 Apr 2024 21:30:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 21:33:17 +0000   Wed, 10 Apr 2024 21:30:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 21:33:17 +0000   Wed, 10 Apr 2024 21:30:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    addons-577364
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e7a3a746deef40f18ee8f1809040fef0
	  System UUID:                e7a3a746-deef-40f1-8ee8-f1809040fef0
	  Boot ID:                    be090d4a-a272-480a-ad6d-ef3ee3c99f91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-bhxbc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-7d69788767-9dhhs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  headlamp                    headlamp-5b77dbd7c4-k9nwt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 coredns-76f75df574-5whqs                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m29s
	  kube-system                 etcd-addons-577364                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m44s
	  kube-system                 kube-apiserver-addons-577364               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-controller-manager-addons-577364      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-proxy-6gx5s                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-addons-577364               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  local-path-storage          local-path-provisioner-78b46b4d5c-bg294    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-sznp7             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m48s (x8 over 4m49s)  kubelet          Node addons-577364 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s (x8 over 4m49s)  kubelet          Node addons-577364 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s (x7 over 4m49s)  kubelet          Node addons-577364 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m42s                  kubelet          Node addons-577364 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s                  kubelet          Node addons-577364 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s                  kubelet          Node addons-577364 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m42s                  kubelet          Node addons-577364 status is now: NodeReady
	  Normal  RegisteredNode           4m29s                  node-controller  Node addons-577364 event: Registered Node addons-577364 in Controller
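	(The percentages in the Allocated resources table above are taken against the node's allocatable capacity listed earlier: 750m requested of the 2-CPU node is 750m / 2000m ≈ 37%, the 298Mi memory request is 305152Ki / 3912788Ki ≈ 7%, and the 426Mi memory limit is ≈ 11%.)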
	
	
	==> dmesg <==
	[ +13.877274] systemd-fstab-generator[1501]: Ignoring "noauto" option for root device
	[  +0.147956] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.056846] kauditd_printk_skb: 116 callbacks suppressed
	[  +5.054307] kauditd_printk_skb: 103 callbacks suppressed
	[  +5.010837] kauditd_printk_skb: 77 callbacks suppressed
	[ +12.614395] kauditd_printk_skb: 27 callbacks suppressed
	[Apr10 21:31] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.751162] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.561883] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.579539] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.368428] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.683091] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.189772] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.091896] kauditd_printk_skb: 12 callbacks suppressed
	[Apr10 21:32] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.189476] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.060524] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.250728] kauditd_printk_skb: 47 callbacks suppressed
	[ +18.256881] kauditd_printk_skb: 6 callbacks suppressed
	[Apr10 21:33] kauditd_printk_skb: 10 callbacks suppressed
	[ +28.544326] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.890435] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.481539] kauditd_printk_skb: 33 callbacks suppressed
	[Apr10 21:34] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.064370] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [b79823ce30e3a9c5a750a98d7f41540b2f13f3946b8a809a43aa3c4f78c262c7] <==
	{"level":"warn","ts":"2024-04-10T21:31:45.482271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.61636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-04-10T21:31:45.482329Z","caller":"traceutil/trace.go:171","msg":"trace[1767232242] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1117; }","duration":"282.692622ms","start":"2024-04-10T21:31:45.199627Z","end":"2024-04-10T21:31:45.48232Z","steps":["trace[1767232242] 'agreement among raft nodes before linearized reading'  (duration: 282.588617ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T21:31:45.482457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.922725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T21:31:45.482499Z","caller":"traceutil/trace.go:171","msg":"trace[1956707684] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1117; }","duration":"351.982462ms","start":"2024-04-10T21:31:45.13051Z","end":"2024-04-10T21:31:45.482493Z","steps":["trace[1956707684] 'agreement among raft nodes before linearized reading'  (duration: 351.929345ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T21:31:45.482552Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-10T21:31:45.130497Z","time spent":"352.021836ms","remote":"127.0.0.1:37820","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-04-10T21:31:45.484496Z","caller":"traceutil/trace.go:171","msg":"trace[235258574] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"159.863239ms","start":"2024-04-10T21:31:45.324619Z","end":"2024-04-10T21:31:45.484482Z","steps":["trace[235258574] 'process raft request'  (duration: 158.318558ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T21:31:48.475786Z","caller":"traceutil/trace.go:171","msg":"trace[1727146268] linearizableReadLoop","detail":"{readStateIndex:1161; appliedIndex:1160; }","duration":"276.142862ms","start":"2024-04-10T21:31:48.19963Z","end":"2024-04-10T21:31:48.475772Z","steps":["trace[1727146268] 'read index received'  (duration: 276.009672ms)","trace[1727146268] 'applied index is now lower than readState.Index'  (duration: 132.582µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-10T21:31:48.475937Z","caller":"traceutil/trace.go:171","msg":"trace[2024274091] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"436.606257ms","start":"2024-04-10T21:31:48.039323Z","end":"2024-04-10T21:31:48.475929Z","steps":["trace[2024274091] 'process raft request'  (duration: 436.361968ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T21:31:48.47602Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-10T21:31:48.039303Z","time spent":"436.654905ms","remote":"127.0.0.1:37992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4437,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-tkmz9\" mod_revision:1036 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-tkmz9\" value_size:4365 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-tkmz9\" > >"}
	{"level":"warn","ts":"2024-04-10T21:31:48.476284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.289852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14571"}
	{"level":"info","ts":"2024-04-10T21:31:48.479001Z","caller":"traceutil/trace.go:171","msg":"trace[1441046484] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1125; }","duration":"176.007058ms","start":"2024-04-10T21:31:48.302978Z","end":"2024-04-10T21:31:48.478985Z","steps":["trace[1441046484] 'agreement among raft nodes before linearized reading'  (duration: 173.222986ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T21:31:48.476382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.748431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-04-10T21:31:48.479175Z","caller":"traceutil/trace.go:171","msg":"trace[517539549] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1125; }","duration":"279.542861ms","start":"2024-04-10T21:31:48.199625Z","end":"2024-04-10T21:31:48.479168Z","steps":["trace[517539549] 'agreement among raft nodes before linearized reading'  (duration: 276.687689ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T21:31:53.479561Z","caller":"traceutil/trace.go:171","msg":"trace[1417745426] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"109.373925ms","start":"2024-04-10T21:31:53.370169Z","end":"2024-04-10T21:31:53.479543Z","steps":["trace[1417745426] 'process raft request'  (duration: 109.094524ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T21:31:56.757519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"461.046754ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6276767503553618691 > lease_revoke:<id:571b8ec9eb0761ec>","response":"size:28"}
	{"level":"info","ts":"2024-04-10T21:31:56.757783Z","caller":"traceutil/trace.go:171","msg":"trace[185231991] linearizableReadLoop","detail":"{readStateIndex:1222; appliedIndex:1221; }","duration":"244.651312ms","start":"2024-04-10T21:31:56.513119Z","end":"2024-04-10T21:31:56.75777Z","steps":["trace[185231991] 'read index received'  (duration: 23.498µs)","trace[185231991] 'applied index is now lower than readState.Index'  (duration: 244.626813ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-10T21:31:56.761017Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.603511ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85617"}
	{"level":"info","ts":"2024-04-10T21:31:56.761914Z","caller":"traceutil/trace.go:171","msg":"trace[1606613779] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1184; }","duration":"248.471669ms","start":"2024-04-10T21:31:56.513095Z","end":"2024-04-10T21:31:56.761567Z","steps":["trace[1606613779] 'agreement among raft nodes before linearized reading'  (duration: 247.141495ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T21:32:40.866287Z","caller":"traceutil/trace.go:171","msg":"trace[791575504] linearizableReadLoop","detail":"{readStateIndex:1550; appliedIndex:1549; }","duration":"176.125206ms","start":"2024-04-10T21:32:40.690042Z","end":"2024-04-10T21:32:40.866167Z","steps":["trace[791575504] 'read index received'  (duration: 175.934497ms)","trace[791575504] 'applied index is now lower than readState.Index'  (duration: 190.178µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-10T21:32:40.867116Z","caller":"traceutil/trace.go:171","msg":"trace[1385425396] transaction","detail":"{read_only:false; response_revision:1496; number_of_response:1; }","duration":"347.702456ms","start":"2024-04-10T21:32:40.5194Z","end":"2024-04-10T21:32:40.867102Z","steps":["trace[1385425396] 'process raft request'  (duration: 346.61659ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T21:32:40.867628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.571785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/hpvc.17c508a0c03f51e1\" ","response":"range_response_count:1 size:887"}
	{"level":"info","ts":"2024-04-10T21:32:40.86831Z","caller":"traceutil/trace.go:171","msg":"trace[1832746790] range","detail":"{range_begin:/registry/events/default/hpvc.17c508a0c03f51e1; range_end:; response_count:1; response_revision:1496; }","duration":"178.285986ms","start":"2024-04-10T21:32:40.690011Z","end":"2024-04-10T21:32:40.868297Z","steps":["trace[1832746790] 'agreement among raft nodes before linearized reading'  (duration: 177.515474ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T21:32:40.868229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-10T21:32:40.519384Z","time spent":"347.829025ms","remote":"127.0.0.1:37884","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":778,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/registry-7rzv5.17c5088bfe53407b\" mod_revision:949 > success:<request_put:<key:\"/registry/events/kube-system/registry-7rzv5.17c5088bfe53407b\" value_size:700 lease:6276767503553618971 >> failure:<request_range:<key:\"/registry/events/kube-system/registry-7rzv5.17c5088bfe53407b\" > >"}
	{"level":"info","ts":"2024-04-10T21:33:00.82003Z","caller":"traceutil/trace.go:171","msg":"trace[1434563987] transaction","detail":"{read_only:false; response_revision:1557; number_of_response:1; }","duration":"397.750691ms","start":"2024-04-10T21:33:00.422263Z","end":"2024-04-10T21:33:00.820014Z","steps":["trace[1434563987] 'process raft request'  (duration: 397.52842ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T21:33:00.820177Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-10T21:33:00.422245Z","time spent":"397.870065ms","remote":"127.0.0.1:37992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4319,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-s9dvf\" mod_revision:1554 > success:<request_put:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-s9dvf\" value_size:4248 >> failure:<request_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-s9dvf\" > >"}
	
	
	==> gcp-auth [84ae8d46a47a55de65ebde03fd49787c27f2e10ec269d741c7d3d1c36d6e8d1e] <==
	2024/04/10 21:31:53 GCP Auth Webhook started!
	2024/04/10 21:31:59 Ready to marshal response ...
	2024/04/10 21:31:59 Ready to write response ...
	2024/04/10 21:31:59 Ready to marshal response ...
	2024/04/10 21:31:59 Ready to write response ...
	2024/04/10 21:32:11 Ready to marshal response ...
	2024/04/10 21:32:11 Ready to write response ...
	2024/04/10 21:32:11 Ready to marshal response ...
	2024/04/10 21:32:11 Ready to write response ...
	2024/04/10 21:32:11 Ready to marshal response ...
	2024/04/10 21:32:11 Ready to write response ...
	2024/04/10 21:32:19 Ready to marshal response ...
	2024/04/10 21:32:19 Ready to write response ...
	2024/04/10 21:32:31 Ready to marshal response ...
	2024/04/10 21:32:31 Ready to write response ...
	2024/04/10 21:32:31 Ready to marshal response ...
	2024/04/10 21:32:31 Ready to write response ...
	2024/04/10 21:32:31 Ready to marshal response ...
	2024/04/10 21:32:31 Ready to write response ...
	2024/04/10 21:32:55 Ready to marshal response ...
	2024/04/10 21:32:55 Ready to write response ...
	2024/04/10 21:33:32 Ready to marshal response ...
	2024/04/10 21:33:32 Ready to write response ...
	2024/04/10 21:34:42 Ready to marshal response ...
	2024/04/10 21:34:42 Ready to write response ...
	
	
	==> kernel <==
	 21:34:54 up 5 min,  0 users,  load average: 0.59, 1.05, 0.53
	Linux addons-577364 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ff191f808df8733ede34b30e98a0c7c983ce447b61c42112c71c0a24fd03a359] <==
	I0410 21:31:22.442233       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0410 21:31:45.492022       1 trace.go:236] Trace[1231062633]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1b126940-5a41-49b0-89f8-1902631a8b56,client:192.168.39.209,api-group:,api-version:v1,name:addons-577364,subresource:status,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/addons-577364/status,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:PATCH (10-Apr-2024 21:31:44.984) (total time: 507ms):
	Trace[1231062633]: ---"Object stored in database" 491ms (21:31:45.483)
	Trace[1231062633]: [507.922354ms] [507.922354ms] END
	I0410 21:32:19.285912       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0410 21:32:19.464476       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.42.252"}
	I0410 21:32:23.383313       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0410 21:32:25.103081       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0410 21:32:26.156035       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0410 21:32:31.252213       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.212.65"}
	I0410 21:33:11.021131       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0410 21:33:49.691779       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0410 21:33:49.692005       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0410 21:33:49.712320       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0410 21:33:49.712385       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0410 21:33:49.807306       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0410 21:33:49.807384       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0410 21:33:49.879752       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0410 21:33:49.881191       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0410 21:33:50.830853       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0410 21:33:50.880738       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0410 21:33:50.895971       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0410 21:34:43.010231       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.159.225"}
	E0410 21:34:46.373198       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0410 21:34:46.472490       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [c9116f2b3094787cfb8cfc2b56f700c3f7be759d74e6590811f8a236dddfc3e9] <==
	W0410 21:34:06.332413       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0410 21:34:06.332444       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0410 21:34:09.941030       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0410 21:34:09.941221       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0410 21:34:11.232261       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0410 21:34:11.232318       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0410 21:34:19.416920       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0410 21:34:19.417055       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0410 21:34:34.347632       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0410 21:34:34.347691       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0410 21:34:35.372774       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0410 21:34:35.372950       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0410 21:34:41.521484       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0410 21:34:41.521607       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0410 21:34:42.810288       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0410 21:34:42.845113       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-bhxbc"
	I0410 21:34:42.860041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.891896ms"
	I0410 21:34:42.872547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="11.855113ms"
	I0410 21:34:42.872641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.43µs"
	I0410 21:34:42.889930       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="58.915µs"
	I0410 21:34:46.145527       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0410 21:34:46.160522       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="6.847µs"
	I0410 21:34:46.171069       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0410 21:34:47.246136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.18214ms"
	I0410 21:34:47.246528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.599µs"
	
	
	==> kube-proxy [79ce81206bf21bf1cd12a34d2d2ba3fb437b15c0c608dca9b962d453ec9d8a50] <==
	I0410 21:30:27.235066       1 server_others.go:72] "Using iptables proxy"
	I0410 21:30:27.248384       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.209"]
	I0410 21:30:27.387214       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 21:30:27.387251       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 21:30:27.387264       1 server_others.go:168] "Using iptables Proxier"
	I0410 21:30:27.392950       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 21:30:27.393083       1 server.go:865] "Version info" version="v1.29.3"
	I0410 21:30:27.393097       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 21:30:27.393887       1 config.go:188] "Starting service config controller"
	I0410 21:30:27.393930       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 21:30:27.393950       1 config.go:97] "Starting endpoint slice config controller"
	I0410 21:30:27.393953       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 21:30:27.394389       1 config.go:315] "Starting node config controller"
	I0410 21:30:27.394396       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 21:30:27.495959       1 shared_informer.go:318] Caches are synced for node config
	I0410 21:30:27.496025       1 shared_informer.go:318] Caches are synced for service config
	I0410 21:30:27.496045       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1db1f1e91e036ebc507da67d76beebb0a7f95fb578807d53546e732d89930655] <==
	W0410 21:30:09.412031       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0410 21:30:09.412064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0410 21:30:10.266569       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0410 21:30:10.268884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0410 21:30:10.273136       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0410 21:30:10.273179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0410 21:30:10.292178       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0410 21:30:10.292380       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 21:30:10.294987       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0410 21:30:10.295032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0410 21:30:10.343268       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0410 21:30:10.343318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0410 21:30:10.389905       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0410 21:30:10.389951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0410 21:30:10.416021       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0410 21:30:10.416168       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0410 21:30:10.587090       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0410 21:30:10.587209       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0410 21:30:10.616277       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0410 21:30:10.616499       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0410 21:30:10.629421       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0410 21:30:10.629656       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0410 21:30:10.668890       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0410 21:30:10.669044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0410 21:30:13.102684       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 10 21:34:42 addons-577364 kubelet[1285]: I0410 21:34:42.853141    1285 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3fc2d72-68cd-4a4a-a959-249492ed517d" containerName="hostpath"
	Apr 10 21:34:42 addons-577364 kubelet[1285]: I0410 21:34:42.956737    1285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9b8dbc81-dc63-4d0a-b174-bbb4874bf564-gcp-creds\") pod \"hello-world-app-5d77478584-bhxbc\" (UID: \"9b8dbc81-dc63-4d0a-b174-bbb4874bf564\") " pod="default/hello-world-app-5d77478584-bhxbc"
	Apr 10 21:34:42 addons-577364 kubelet[1285]: I0410 21:34:42.956946    1285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bmqv\" (UniqueName: \"kubernetes.io/projected/9b8dbc81-dc63-4d0a-b174-bbb4874bf564-kube-api-access-5bmqv\") pod \"hello-world-app-5d77478584-bhxbc\" (UID: \"9b8dbc81-dc63-4d0a-b174-bbb4874bf564\") " pod="default/hello-world-app-5d77478584-bhxbc"
	Apr 10 21:34:44 addons-577364 kubelet[1285]: I0410 21:34:44.566856    1285 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlcq6\" (UniqueName: \"kubernetes.io/projected/617933ce-8840-4242-bbed-1e88480de282-kube-api-access-mlcq6\") pod \"617933ce-8840-4242-bbed-1e88480de282\" (UID: \"617933ce-8840-4242-bbed-1e88480de282\") "
	Apr 10 21:34:44 addons-577364 kubelet[1285]: I0410 21:34:44.571419    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617933ce-8840-4242-bbed-1e88480de282-kube-api-access-mlcq6" (OuterVolumeSpecName: "kube-api-access-mlcq6") pod "617933ce-8840-4242-bbed-1e88480de282" (UID: "617933ce-8840-4242-bbed-1e88480de282"). InnerVolumeSpecName "kube-api-access-mlcq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 10 21:34:44 addons-577364 kubelet[1285]: I0410 21:34:44.668006    1285 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mlcq6\" (UniqueName: \"kubernetes.io/projected/617933ce-8840-4242-bbed-1e88480de282-kube-api-access-mlcq6\") on node \"addons-577364\" DevicePath \"\""
	Apr 10 21:34:45 addons-577364 kubelet[1285]: I0410 21:34:45.203211    1285 scope.go:117] "RemoveContainer" containerID="1c3f5250e5b561e375b5505962d6d93493245785b44d4f787e87e9bf0accefd3"
	Apr 10 21:34:45 addons-577364 kubelet[1285]: I0410 21:34:45.324040    1285 scope.go:117] "RemoveContainer" containerID="1c3f5250e5b561e375b5505962d6d93493245785b44d4f787e87e9bf0accefd3"
	Apr 10 21:34:45 addons-577364 kubelet[1285]: E0410 21:34:45.326464    1285 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c3f5250e5b561e375b5505962d6d93493245785b44d4f787e87e9bf0accefd3\": container with ID starting with 1c3f5250e5b561e375b5505962d6d93493245785b44d4f787e87e9bf0accefd3 not found: ID does not exist" containerID="1c3f5250e5b561e375b5505962d6d93493245785b44d4f787e87e9bf0accefd3"
	Apr 10 21:34:45 addons-577364 kubelet[1285]: I0410 21:34:45.326544    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3f5250e5b561e375b5505962d6d93493245785b44d4f787e87e9bf0accefd3"} err="failed to get container status \"1c3f5250e5b561e375b5505962d6d93493245785b44d4f787e87e9bf0accefd3\": rpc error: code = NotFound desc = could not find container \"1c3f5250e5b561e375b5505962d6d93493245785b44d4f787e87e9bf0accefd3\": container with ID starting with 1c3f5250e5b561e375b5505962d6d93493245785b44d4f787e87e9bf0accefd3 not found: ID does not exist"
	Apr 10 21:34:46 addons-577364 kubelet[1285]: I0410 21:34:46.515572    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617933ce-8840-4242-bbed-1e88480de282" path="/var/lib/kubelet/pods/617933ce-8840-4242-bbed-1e88480de282/volumes"
	Apr 10 21:34:46 addons-577364 kubelet[1285]: I0410 21:34:46.516112    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad555aff-2ec2-4eb1-8cd3-555e4f6f24a7" path="/var/lib/kubelet/pods/ad555aff-2ec2-4eb1-8cd3-555e4f6f24a7/volumes"
	Apr 10 21:34:46 addons-577364 kubelet[1285]: I0410 21:34:46.516581    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59430b9-57f9-4224-b2c7-b6c596913276" path="/var/lib/kubelet/pods/c59430b9-57f9-4224-b2c7-b6c596913276/volumes"
	Apr 10 21:34:47 addons-577364 kubelet[1285]: I0410 21:34:47.232317    1285 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-bhxbc" podStartSLOduration=1.754276005 podStartE2EDuration="5.232260061s" podCreationTimestamp="2024-04-10 21:34:42 +0000 UTC" firstStartedPulling="2024-04-10 21:34:43.443674964 +0000 UTC m=+271.087891160" lastFinishedPulling="2024-04-10 21:34:46.921659021 +0000 UTC m=+274.565875216" observedRunningTime="2024-04-10 21:34:47.232059702 +0000 UTC m=+274.876275914" watchObservedRunningTime="2024-04-10 21:34:47.232260061 +0000 UTC m=+274.876476275"
	Apr 10 21:34:49 addons-577364 kubelet[1285]: I0410 21:34:49.513766    1285 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4gzm\" (UniqueName: \"kubernetes.io/projected/280a401b-5f4e-4e53-b170-11417823715b-kube-api-access-j4gzm\") pod \"280a401b-5f4e-4e53-b170-11417823715b\" (UID: \"280a401b-5f4e-4e53-b170-11417823715b\") "
	Apr 10 21:34:49 addons-577364 kubelet[1285]: I0410 21:34:49.513905    1285 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/280a401b-5f4e-4e53-b170-11417823715b-webhook-cert\") pod \"280a401b-5f4e-4e53-b170-11417823715b\" (UID: \"280a401b-5f4e-4e53-b170-11417823715b\") "
	Apr 10 21:34:49 addons-577364 kubelet[1285]: I0410 21:34:49.516058    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280a401b-5f4e-4e53-b170-11417823715b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "280a401b-5f4e-4e53-b170-11417823715b" (UID: "280a401b-5f4e-4e53-b170-11417823715b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 10 21:34:49 addons-577364 kubelet[1285]: I0410 21:34:49.517244    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/280a401b-5f4e-4e53-b170-11417823715b-kube-api-access-j4gzm" (OuterVolumeSpecName: "kube-api-access-j4gzm") pod "280a401b-5f4e-4e53-b170-11417823715b" (UID: "280a401b-5f4e-4e53-b170-11417823715b"). InnerVolumeSpecName "kube-api-access-j4gzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 10 21:34:49 addons-577364 kubelet[1285]: I0410 21:34:49.614982    1285 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j4gzm\" (UniqueName: \"kubernetes.io/projected/280a401b-5f4e-4e53-b170-11417823715b-kube-api-access-j4gzm\") on node \"addons-577364\" DevicePath \"\""
	Apr 10 21:34:49 addons-577364 kubelet[1285]: I0410 21:34:49.615015    1285 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/280a401b-5f4e-4e53-b170-11417823715b-webhook-cert\") on node \"addons-577364\" DevicePath \"\""
	Apr 10 21:34:50 addons-577364 kubelet[1285]: I0410 21:34:50.242664    1285 scope.go:117] "RemoveContainer" containerID="5b10b6fe649e9e1ef8d2bb331aae84a87dac5217b10b302152382651c3d60d91"
	Apr 10 21:34:50 addons-577364 kubelet[1285]: I0410 21:34:50.266316    1285 scope.go:117] "RemoveContainer" containerID="5b10b6fe649e9e1ef8d2bb331aae84a87dac5217b10b302152382651c3d60d91"
	Apr 10 21:34:50 addons-577364 kubelet[1285]: E0410 21:34:50.266913    1285 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b10b6fe649e9e1ef8d2bb331aae84a87dac5217b10b302152382651c3d60d91\": container with ID starting with 5b10b6fe649e9e1ef8d2bb331aae84a87dac5217b10b302152382651c3d60d91 not found: ID does not exist" containerID="5b10b6fe649e9e1ef8d2bb331aae84a87dac5217b10b302152382651c3d60d91"
	Apr 10 21:34:50 addons-577364 kubelet[1285]: I0410 21:34:50.266960    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b10b6fe649e9e1ef8d2bb331aae84a87dac5217b10b302152382651c3d60d91"} err="failed to get container status \"5b10b6fe649e9e1ef8d2bb331aae84a87dac5217b10b302152382651c3d60d91\": rpc error: code = NotFound desc = could not find container \"5b10b6fe649e9e1ef8d2bb331aae84a87dac5217b10b302152382651c3d60d91\": container with ID starting with 5b10b6fe649e9e1ef8d2bb331aae84a87dac5217b10b302152382651c3d60d91 not found: ID does not exist"
	Apr 10 21:34:50 addons-577364 kubelet[1285]: I0410 21:34:50.518978    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="280a401b-5f4e-4e53-b170-11417823715b" path="/var/lib/kubelet/pods/280a401b-5f4e-4e53-b170-11417823715b/volumes"
	
	
	==> storage-provisioner [72cde0b3ae0bccfb3f13573eba3ccc04f1dc87f130ba80cb6121feaac314f80f] <==
	I0410 21:30:33.703304       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0410 21:30:33.720967       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0410 21:30:33.721005       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0410 21:30:33.741605       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0410 21:30:33.741870       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-577364_fcd7405c-062a-4df9-81f6-8627218702ba!
	I0410 21:30:33.787381       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fdddcbf7-e866-4ae6-ac3e-b18c57cf9ef7", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-577364_fcd7405c-062a-4df9-81f6-8627218702ba became leader
	I0410 21:30:33.842232       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-577364_fcd7405c-062a-4df9-81f6-8627218702ba!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-577364 -n addons-577364
helpers_test.go:261: (dbg) Run:  kubectl --context addons-577364 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.24s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.3s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-577364
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-577364: exit status 82 (2m0.490267821s)

                                                
                                                
-- stdout --
	* Stopping node "addons-577364"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-577364" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-577364
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-577364: exit status 11 (21.522378711s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.209:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-577364" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-577364
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-577364: exit status 11 (6.144264381s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.209:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-577364" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-577364
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-577364: exit status 11 (6.143522955s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.209:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-577364" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.30s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (364.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-150873 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-150873 -v=7 --alsologtostderr
E0410 21:49:37.956999   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-150873 -v=7 --alsologtostderr: exit status 82 (2m2.054404675s)

                                                
                                                
-- stdout --
	* Stopping node "ha-150873-m04"  ...
	* Stopping node "ha-150873-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 21:49:13.989702   28360 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:49:13.990006   28360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:49:13.990017   28360 out.go:304] Setting ErrFile to fd 2...
	I0410 21:49:13.990022   28360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:49:13.990261   28360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:49:13.990559   28360 out.go:298] Setting JSON to false
	I0410 21:49:13.990660   28360 mustload.go:65] Loading cluster: ha-150873
	I0410 21:49:13.991052   28360 config.go:182] Loaded profile config "ha-150873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:49:13.991161   28360 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/config.json ...
	I0410 21:49:13.991385   28360 mustload.go:65] Loading cluster: ha-150873
	I0410 21:49:13.991574   28360 config.go:182] Loaded profile config "ha-150873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:49:13.991616   28360 stop.go:39] StopHost: ha-150873-m04
	I0410 21:49:13.992040   28360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:49:13.992093   28360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:49:14.007184   28360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35419
	I0410 21:49:14.007628   28360 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:49:14.008220   28360 main.go:141] libmachine: Using API Version  1
	I0410 21:49:14.008260   28360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:49:14.008607   28360 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:49:14.011240   28360 out.go:177] * Stopping node "ha-150873-m04"  ...
	I0410 21:49:14.012526   28360 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0410 21:49:14.012555   28360 main.go:141] libmachine: (ha-150873-m04) Calling .DriverName
	I0410 21:49:14.012775   28360 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0410 21:49:14.012802   28360 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHHostname
	I0410 21:49:14.015776   28360 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:49:14.016226   28360 main.go:141] libmachine: (ha-150873-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:5f:bd", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:47:37 +0000 UTC Type:0 Mac:52:54:00:56:5f:bd Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-150873-m04 Clientid:01:52:54:00:56:5f:bd}
	I0410 21:49:14.016268   28360 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined IP address 192.168.39.144 and MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:49:14.016381   28360 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHPort
	I0410 21:49:14.016568   28360 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHKeyPath
	I0410 21:49:14.016717   28360 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHUsername
	I0410 21:49:14.016829   28360 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873-m04/id_rsa Username:docker}
	I0410 21:49:14.104275   28360 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0410 21:49:14.158669   28360 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0410 21:49:14.213606   28360 main.go:141] libmachine: Stopping "ha-150873-m04"...
	I0410 21:49:14.213645   28360 main.go:141] libmachine: (ha-150873-m04) Calling .GetState
	I0410 21:49:14.215305   28360 main.go:141] libmachine: (ha-150873-m04) Calling .Stop
	I0410 21:49:14.219978   28360 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 0/120
	I0410 21:49:15.537333   28360 main.go:141] libmachine: (ha-150873-m04) Calling .GetState
	I0410 21:49:15.538733   28360 main.go:141] libmachine: Machine "ha-150873-m04" was stopped.
	I0410 21:49:15.538753   28360 stop.go:75] duration metric: took 1.526229544s to stop
	I0410 21:49:15.538784   28360 stop.go:39] StopHost: ha-150873-m03
	I0410 21:49:15.539129   28360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:49:15.539172   28360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:49:15.554500   28360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0410 21:49:15.554945   28360 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:49:15.555405   28360 main.go:141] libmachine: Using API Version  1
	I0410 21:49:15.555425   28360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:49:15.555752   28360 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:49:15.557807   28360 out.go:177] * Stopping node "ha-150873-m03"  ...
	I0410 21:49:15.559052   28360 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0410 21:49:15.559073   28360 main.go:141] libmachine: (ha-150873-m03) Calling .DriverName
	I0410 21:49:15.559320   28360 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0410 21:49:15.559346   28360 main.go:141] libmachine: (ha-150873-m03) Calling .GetSSHHostname
	I0410 21:49:15.562660   28360 main.go:141] libmachine: (ha-150873-m03) DBG | domain ha-150873-m03 has defined MAC address 52:54:00:07:78:28 in network mk-ha-150873
	I0410 21:49:15.563087   28360 main.go:141] libmachine: (ha-150873-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:78:28", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:46:11 +0000 UTC Type:0 Mac:52:54:00:07:78:28 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:ha-150873-m03 Clientid:01:52:54:00:07:78:28}
	I0410 21:49:15.563117   28360 main.go:141] libmachine: (ha-150873-m03) DBG | domain ha-150873-m03 has defined IP address 192.168.39.143 and MAC address 52:54:00:07:78:28 in network mk-ha-150873
	I0410 21:49:15.563253   28360 main.go:141] libmachine: (ha-150873-m03) Calling .GetSSHPort
	I0410 21:49:15.563427   28360 main.go:141] libmachine: (ha-150873-m03) Calling .GetSSHKeyPath
	I0410 21:49:15.563653   28360 main.go:141] libmachine: (ha-150873-m03) Calling .GetSSHUsername
	I0410 21:49:15.563798   28360 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873-m03/id_rsa Username:docker}
	I0410 21:49:15.654057   28360 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0410 21:49:15.710026   28360 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0410 21:49:15.766279   28360 main.go:141] libmachine: Stopping "ha-150873-m03"...
	I0410 21:49:15.766325   28360 main.go:141] libmachine: (ha-150873-m03) Calling .GetState
	I0410 21:49:15.767913   28360 main.go:141] libmachine: (ha-150873-m03) Calling .Stop
	I0410 21:49:15.771533   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 0/120
	I0410 21:49:16.773361   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 1/120
	I0410 21:49:17.775051   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 2/120
	I0410 21:49:18.776420   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 3/120
	I0410 21:49:19.778018   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 4/120
	I0410 21:49:20.779989   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 5/120
	I0410 21:49:21.781392   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 6/120
	I0410 21:49:22.783046   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 7/120
	I0410 21:49:23.784483   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 8/120
	I0410 21:49:24.786007   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 9/120
	I0410 21:49:25.788136   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 10/120
	I0410 21:49:26.789564   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 11/120
	I0410 21:49:27.791043   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 12/120
	I0410 21:49:28.792926   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 13/120
	I0410 21:49:29.794827   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 14/120
	I0410 21:49:30.796183   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 15/120
	I0410 21:49:31.797986   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 16/120
	I0410 21:49:32.799688   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 17/120
	I0410 21:49:33.801401   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 18/120
	I0410 21:49:34.802756   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 19/120
	I0410 21:49:35.804347   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 20/120
	I0410 21:49:36.805873   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 21/120
	I0410 21:49:37.807506   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 22/120
	I0410 21:49:38.809221   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 23/120
	I0410 21:49:39.811547   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 24/120
	I0410 21:49:40.813751   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 25/120
	I0410 21:49:41.815407   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 26/120
	I0410 21:49:42.816854   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 27/120
	I0410 21:49:43.818273   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 28/120
	I0410 21:49:44.819826   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 29/120
	I0410 21:49:45.821763   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 30/120
	I0410 21:49:46.823354   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 31/120
	I0410 21:49:47.824894   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 32/120
	I0410 21:49:48.826671   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 33/120
	I0410 21:49:49.828056   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 34/120
	I0410 21:49:50.829894   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 35/120
	I0410 21:49:51.831450   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 36/120
	I0410 21:49:52.833606   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 37/120
	I0410 21:49:53.835217   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 38/120
	I0410 21:49:54.836710   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 39/120
	I0410 21:49:55.838679   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 40/120
	I0410 21:49:56.840287   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 41/120
	I0410 21:49:57.841746   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 42/120
	I0410 21:49:58.843105   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 43/120
	I0410 21:49:59.844876   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 44/120
	I0410 21:50:00.846785   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 45/120
	I0410 21:50:01.848599   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 46/120
	I0410 21:50:02.851060   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 47/120
	I0410 21:50:03.852958   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 48/120
	I0410 21:50:04.854957   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 49/120
	I0410 21:50:05.857122   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 50/120
	I0410 21:50:06.858930   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 51/120
	I0410 21:50:07.860297   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 52/120
	I0410 21:50:08.861680   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 53/120
	I0410 21:50:09.863958   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 54/120
	I0410 21:50:10.865791   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 55/120
	I0410 21:50:11.867696   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 56/120
	I0410 21:50:12.869274   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 57/120
	I0410 21:50:13.871161   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 58/120
	I0410 21:50:14.872833   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 59/120
	I0410 21:50:15.874992   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 60/120
	I0410 21:50:16.876542   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 61/120
	I0410 21:50:17.878780   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 62/120
	I0410 21:50:18.880124   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 63/120
	I0410 21:50:19.881707   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 64/120
	I0410 21:50:20.883521   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 65/120
	I0410 21:50:21.885233   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 66/120
	I0410 21:50:22.887664   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 67/120
	I0410 21:50:23.889103   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 68/120
	I0410 21:50:24.890404   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 69/120
	I0410 21:50:25.891936   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 70/120
	I0410 21:50:26.893275   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 71/120
	I0410 21:50:27.894900   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 72/120
	I0410 21:50:28.896699   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 73/120
	I0410 21:50:29.898413   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 74/120
	I0410 21:50:30.900077   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 75/120
	I0410 21:50:31.901477   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 76/120
	I0410 21:50:32.903009   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 77/120
	I0410 21:50:33.904460   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 78/120
	I0410 21:50:34.906071   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 79/120
	I0410 21:50:35.908377   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 80/120
	I0410 21:50:36.910432   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 81/120
	I0410 21:50:37.912326   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 82/120
	I0410 21:50:38.914248   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 83/120
	I0410 21:50:39.915679   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 84/120
	I0410 21:50:40.917311   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 85/120
	I0410 21:50:41.919184   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 86/120
	I0410 21:50:42.920917   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 87/120
	I0410 21:50:43.922595   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 88/120
	I0410 21:50:44.925105   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 89/120
	I0410 21:50:45.926803   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 90/120
	I0410 21:50:46.928317   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 91/120
	I0410 21:50:47.929762   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 92/120
	I0410 21:50:48.932102   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 93/120
	I0410 21:50:49.933471   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 94/120
	I0410 21:50:50.935604   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 95/120
	I0410 21:50:51.937228   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 96/120
	I0410 21:50:52.938934   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 97/120
	I0410 21:50:53.941096   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 98/120
	I0410 21:50:54.942404   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 99/120
	I0410 21:50:55.943847   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 100/120
	I0410 21:50:56.945228   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 101/120
	I0410 21:50:57.946813   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 102/120
	I0410 21:50:58.948453   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 103/120
	I0410 21:50:59.950093   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 104/120
	I0410 21:51:00.952272   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 105/120
	I0410 21:51:01.953782   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 106/120
	I0410 21:51:02.955564   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 107/120
	I0410 21:51:03.957185   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 108/120
	I0410 21:51:04.958634   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 109/120
	I0410 21:51:05.960544   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 110/120
	I0410 21:51:06.961867   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 111/120
	I0410 21:51:07.962977   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 112/120
	I0410 21:51:08.964430   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 113/120
	I0410 21:51:09.965809   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 114/120
	I0410 21:51:10.967932   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 115/120
	I0410 21:51:11.969354   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 116/120
	I0410 21:51:12.970836   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 117/120
	I0410 21:51:13.972320   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 118/120
	I0410 21:51:14.973748   28360 main.go:141] libmachine: (ha-150873-m03) Waiting for machine to stop 119/120
	I0410 21:51:15.974681   28360 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0410 21:51:15.974769   28360 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0410 21:51:15.976933   28360 out.go:177] 
	W0410 21:51:15.978420   28360 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0410 21:51:15.978436   28360 out.go:239] * 
	* 
	W0410 21:51:15.980740   28360 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 21:51:15.982361   28360 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-150873 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-150873 --wait=true -v=7 --alsologtostderr
E0410 21:51:54.115196   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:51:59.610340   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:52:21.797487   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-150873 --wait=true -v=7 --alsologtostderr: (3m59.323175664s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-150873
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-150873 -n ha-150873
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-150873 logs -n 25: (2.087387238s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-150873 cp ha-150873-m03:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m02:/home/docker/cp-test_ha-150873-m03_ha-150873-m02.txt              |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m03 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n ha-150873-m02 sudo cat                                         | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-150873-m03_ha-150873-m02.txt                            |           |         |                |                     |                     |
	| cp      | ha-150873 cp ha-150873-m03:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04:/home/docker/cp-test_ha-150873-m03_ha-150873-m04.txt              |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m03 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n ha-150873-m04 sudo cat                                         | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-150873-m03_ha-150873-m04.txt                            |           |         |                |                     |                     |
	| cp      | ha-150873 cp testdata/cp-test.txt                                               | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04:/home/docker/cp-test.txt                                          |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| cp      | ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile947152864/001/cp-test_ha-150873-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| cp      | ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873:/home/docker/cp-test_ha-150873-m04_ha-150873.txt                      |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n ha-150873 sudo cat                                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-150873-m04_ha-150873.txt                                |           |         |                |                     |                     |
	| cp      | ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m02:/home/docker/cp-test_ha-150873-m04_ha-150873-m02.txt              |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n ha-150873-m02 sudo cat                                         | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-150873-m04_ha-150873-m02.txt                            |           |         |                |                     |                     |
	| cp      | ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m03:/home/docker/cp-test_ha-150873-m04_ha-150873-m03.txt              |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n ha-150873-m03 sudo cat                                         | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-150873-m04_ha-150873-m03.txt                            |           |         |                |                     |                     |
	| node    | ha-150873 node stop m02 -v=7                                                    | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| node    | ha-150873 node start m02 -v=7                                                   | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:49 UTC |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| node    | list -p ha-150873 -v=7                                                          | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| stop    | -p ha-150873 -v=7                                                               | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| start   | -p ha-150873 --wait=true -v=7                                                   | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:51 UTC | 10 Apr 24 21:55 UTC |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| node    | list -p ha-150873                                                               | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:55 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 21:51:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 21:51:16.039678   28838 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:51:16.039937   28838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:51:16.039947   28838 out.go:304] Setting ErrFile to fd 2...
	I0410 21:51:16.039952   28838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:51:16.040142   28838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:51:16.040733   28838 out.go:298] Setting JSON to false
	I0410 21:51:16.041631   28838 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2018,"bootTime":1712783858,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 21:51:16.041692   28838 start.go:139] virtualization: kvm guest
	I0410 21:51:16.044134   28838 out.go:177] * [ha-150873] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 21:51:16.046561   28838 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 21:51:16.046616   28838 notify.go:220] Checking for updates...
	I0410 21:51:16.049355   28838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 21:51:16.050728   28838 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 21:51:16.052123   28838 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:51:16.053419   28838 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 21:51:16.054853   28838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 21:51:16.056599   28838 config.go:182] Loaded profile config "ha-150873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:51:16.056683   28838 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 21:51:16.057148   28838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:51:16.057191   28838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:51:16.074305   28838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36401
	I0410 21:51:16.074713   28838 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:51:16.075369   28838 main.go:141] libmachine: Using API Version  1
	I0410 21:51:16.075392   28838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:51:16.075799   28838 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:51:16.076059   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:51:16.113343   28838 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 21:51:16.114922   28838 start.go:297] selected driver: kvm2
	I0410 21:51:16.114945   28838 start.go:901] validating driver "kvm2" against &{Name:ha-150873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-150873 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.143 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.144 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:51:16.115142   28838 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 21:51:16.115495   28838 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:51:16.115572   28838 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 21:51:16.130178   28838 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 21:51:16.130843   28838 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 21:51:16.130924   28838 cni.go:84] Creating CNI manager for ""
	I0410 21:51:16.130939   28838 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0410 21:51:16.131002   28838 start.go:340] cluster config:
	{Name:ha-150873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-150873 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.143 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.144 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:51:16.131148   28838 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:51:16.133237   28838 out.go:177] * Starting "ha-150873" primary control-plane node in "ha-150873" cluster
	I0410 21:51:16.134680   28838 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 21:51:16.134723   28838 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 21:51:16.134730   28838 cache.go:56] Caching tarball of preloaded images
	I0410 21:51:16.134867   28838 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 21:51:16.134884   28838 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 21:51:16.135003   28838 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/config.json ...
	I0410 21:51:16.135241   28838 start.go:360] acquireMachinesLock for ha-150873: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 21:51:16.135287   28838 start.go:364] duration metric: took 29.911µs to acquireMachinesLock for "ha-150873"
	I0410 21:51:16.135306   28838 start.go:96] Skipping create...Using existing machine configuration
	I0410 21:51:16.135317   28838 fix.go:54] fixHost starting: 
	I0410 21:51:16.135589   28838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:51:16.135618   28838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:51:16.150140   28838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33911
	I0410 21:51:16.150628   28838 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:51:16.151233   28838 main.go:141] libmachine: Using API Version  1
	I0410 21:51:16.151259   28838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:51:16.151584   28838 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:51:16.151798   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:51:16.152014   28838 main.go:141] libmachine: (ha-150873) Calling .GetState
	I0410 21:51:16.153793   28838 fix.go:112] recreateIfNeeded on ha-150873: state=Running err=<nil>
	W0410 21:51:16.153814   28838 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 21:51:16.156157   28838 out.go:177] * Updating the running kvm2 "ha-150873" VM ...
	I0410 21:51:16.157559   28838 machine.go:94] provisionDockerMachine start ...
	I0410 21:51:16.157579   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:51:16.157778   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.160501   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.161299   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.161334   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.161452   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:51:16.161646   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.161843   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.162039   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:51:16.162238   28838 main.go:141] libmachine: Using SSH client type: native
	I0410 21:51:16.162464   28838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0410 21:51:16.162486   28838 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 21:51:16.270248   28838 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150873
	
	I0410 21:51:16.270284   28838 main.go:141] libmachine: (ha-150873) Calling .GetMachineName
	I0410 21:51:16.270567   28838 buildroot.go:166] provisioning hostname "ha-150873"
	I0410 21:51:16.270591   28838 main.go:141] libmachine: (ha-150873) Calling .GetMachineName
	I0410 21:51:16.270792   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.273376   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.273728   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.273762   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.273912   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:51:16.274095   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.274247   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.274386   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:51:16.274543   28838 main.go:141] libmachine: Using SSH client type: native
	I0410 21:51:16.274738   28838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0410 21:51:16.274751   28838 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-150873 && echo "ha-150873" | sudo tee /etc/hostname
	I0410 21:51:16.402765   28838 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150873
	
	I0410 21:51:16.402807   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.405799   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.406244   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.406278   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.406535   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:51:16.406717   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.406876   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.406989   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:51:16.407112   28838 main.go:141] libmachine: Using SSH client type: native
	I0410 21:51:16.407263   28838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0410 21:51:16.407289   28838 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-150873' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-150873/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-150873' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 21:51:16.513016   28838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 21:51:16.513041   28838 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 21:51:16.513078   28838 buildroot.go:174] setting up certificates
	I0410 21:51:16.513087   28838 provision.go:84] configureAuth start
	I0410 21:51:16.513098   28838 main.go:141] libmachine: (ha-150873) Calling .GetMachineName
	I0410 21:51:16.513349   28838 main.go:141] libmachine: (ha-150873) Calling .GetIP
	I0410 21:51:16.516046   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.516537   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.516567   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.516718   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.519025   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.519424   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.519528   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.519620   28838 provision.go:143] copyHostCerts
	I0410 21:51:16.519644   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 21:51:16.519687   28838 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 21:51:16.519695   28838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 21:51:16.519769   28838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 21:51:16.519836   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 21:51:16.519861   28838 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 21:51:16.519871   28838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 21:51:16.519909   28838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 21:51:16.519972   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 21:51:16.519989   28838 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 21:51:16.519996   28838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 21:51:16.520038   28838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 21:51:16.520101   28838 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.ha-150873 san=[127.0.0.1 192.168.39.12 ha-150873 localhost minikube]
	I0410 21:51:16.765692   28838 provision.go:177] copyRemoteCerts
	I0410 21:51:16.765756   28838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 21:51:16.765784   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.768437   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.768835   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.768865   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.769073   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:51:16.769275   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.769451   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:51:16.769574   28838 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	I0410 21:51:16.854435   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0410 21:51:16.854510   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 21:51:16.888051   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0410 21:51:16.888121   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0410 21:51:16.915829   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0410 21:51:16.915902   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 21:51:16.950373   28838 provision.go:87] duration metric: took 437.27063ms to configureAuth
	I0410 21:51:16.950403   28838 buildroot.go:189] setting minikube options for container-runtime
	I0410 21:51:16.950680   28838 config.go:182] Loaded profile config "ha-150873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:51:16.950761   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.953644   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.954038   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.954068   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.954363   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:51:16.954654   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.954828   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.954991   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:51:16.955198   28838 main.go:141] libmachine: Using SSH client type: native
	I0410 21:51:16.955430   28838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0410 21:51:16.955459   28838 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 21:52:47.944047   28838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 21:52:47.944110   28838 machine.go:97] duration metric: took 1m31.786528876s to provisionDockerMachine
	I0410 21:52:47.944155   28838 start.go:293] postStartSetup for "ha-150873" (driver="kvm2")
	I0410 21:52:47.944176   28838 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 21:52:47.944205   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:47.944587   28838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 21:52:47.944615   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:52:47.948065   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:47.948579   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:47.948607   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:47.948766   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:52:47.948975   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:47.949122   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:52:47.949251   28838 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	I0410 21:52:48.032250   28838 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 21:52:48.036615   28838 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 21:52:48.036642   28838 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 21:52:48.036714   28838 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 21:52:48.036831   28838 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 21:52:48.036849   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> /etc/ssl/certs/130012.pem
	I0410 21:52:48.036974   28838 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 21:52:48.047416   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 21:52:48.074242   28838 start.go:296] duration metric: took 130.066467ms for postStartSetup
	I0410 21:52:48.074281   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:48.074564   28838 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0410 21:52:48.074598   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:52:48.077254   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.077733   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:48.077763   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.077940   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:52:48.078152   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:48.078324   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:52:48.078515   28838 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	W0410 21:52:48.159939   28838 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0410 21:52:48.159963   28838 fix.go:56] duration metric: took 1m32.024652278s for fixHost
	I0410 21:52:48.159983   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:52:48.162936   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.163362   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:48.163389   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.163481   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:52:48.163696   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:48.163906   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:48.164076   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:52:48.164234   28838 main.go:141] libmachine: Using SSH client type: native
	I0410 21:52:48.164458   28838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0410 21:52:48.164471   28838 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 21:52:48.265486   28838 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712785968.249510720
	
	I0410 21:52:48.265522   28838 fix.go:216] guest clock: 1712785968.249510720
	I0410 21:52:48.265528   28838 fix.go:229] Guest: 2024-04-10 21:52:48.24951072 +0000 UTC Remote: 2024-04-10 21:52:48.159970823 +0000 UTC m=+92.167300342 (delta=89.539897ms)
	I0410 21:52:48.265546   28838 fix.go:200] guest clock delta is within tolerance: 89.539897ms
	I0410 21:52:48.265552   28838 start.go:83] releasing machines lock for "ha-150873", held for 1m32.130254676s
	I0410 21:52:48.265579   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:48.265826   28838 main.go:141] libmachine: (ha-150873) Calling .GetIP
	I0410 21:52:48.268824   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.269208   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:48.269240   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.269387   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:48.269938   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:48.270169   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:48.270304   28838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 21:52:48.270341   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:52:48.270454   28838 ssh_runner.go:195] Run: cat /version.json
	I0410 21:52:48.270469   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:52:48.273381   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.273772   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:48.273794   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.273829   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.273974   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:52:48.274153   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:48.274288   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:52:48.274295   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:48.274309   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.274402   28838 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	I0410 21:52:48.274504   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:52:48.274659   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:48.274845   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:52:48.274990   28838 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	I0410 21:52:48.382606   28838 ssh_runner.go:195] Run: systemctl --version
	I0410 21:52:48.391251   28838 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 21:52:48.563103   28838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 21:52:48.578133   28838 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 21:52:48.578199   28838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 21:52:48.589055   28838 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0410 21:52:48.589077   28838 start.go:494] detecting cgroup driver to use...
	I0410 21:52:48.589134   28838 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 21:52:48.609417   28838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 21:52:48.631411   28838 docker.go:217] disabling cri-docker service (if available) ...
	I0410 21:52:48.631492   28838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 21:52:48.647700   28838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 21:52:48.663053   28838 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 21:52:48.835357   28838 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 21:52:48.996820   28838 docker.go:233] disabling docker service ...
	I0410 21:52:48.996880   28838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 21:52:49.014085   28838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 21:52:49.028496   28838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 21:52:49.183406   28838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 21:52:49.336954   28838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 21:52:49.352961   28838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 21:52:49.374425   28838 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 21:52:49.374488   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.387583   28838 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 21:52:49.387647   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.399510   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.411803   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.424001   28838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 21:52:49.437031   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.448658   28838 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.459945   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.471588   28838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 21:52:49.482315   28838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 21:52:49.493328   28838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 21:52:49.656363   28838 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 21:52:51.740583   28838 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.084176589s)
	I0410 21:52:51.740613   28838 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 21:52:51.740666   28838 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 21:52:51.747208   28838 start.go:562] Will wait 60s for crictl version
	I0410 21:52:51.747302   28838 ssh_runner.go:195] Run: which crictl
	I0410 21:52:51.751869   28838 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 21:52:51.793368   28838 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 21:52:51.793443   28838 ssh_runner.go:195] Run: crio --version
	I0410 21:52:51.826527   28838 ssh_runner.go:195] Run: crio --version
	I0410 21:52:51.861008   28838 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 21:52:51.863027   28838 main.go:141] libmachine: (ha-150873) Calling .GetIP
	I0410 21:52:51.866193   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:51.866581   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:51.866607   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:51.866842   28838 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 21:52:51.871871   28838 kubeadm.go:877] updating cluster {Name:ha-150873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-150873 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.143 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.144 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 21:52:51.871995   28838 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 21:52:51.872035   28838 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 21:52:51.919789   28838 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 21:52:51.919815   28838 crio.go:433] Images already preloaded, skipping extraction
	I0410 21:52:51.919868   28838 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 21:52:51.963846   28838 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 21:52:51.963868   28838 cache_images.go:84] Images are preloaded, skipping loading
	I0410 21:52:51.963876   28838 kubeadm.go:928] updating node { 192.168.39.12 8443 v1.29.3 crio true true} ...
	I0410 21:52:51.963962   28838 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-150873 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-150873 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
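
The [Unit]/[Service] fragment above is the kubelet drop-in that is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps further down. A minimal sketch, assuming shell access to the ha-150873 node (these commands are not part of the test run), of confirming the drop-in took effect:

    # sketch: check the drop-in on disk and the effective kubelet unit
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl cat kubelet | grep -- '--node-ip'
    systemctl is-active kubelet
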
	I0410 21:52:51.964020   28838 ssh_runner.go:195] Run: crio config
	I0410 21:52:52.022155   28838 cni.go:84] Creating CNI manager for ""
	I0410 21:52:52.022176   28838 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0410 21:52:52.022186   28838 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 21:52:52.022206   28838 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-150873 NodeName:ha-150873 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 21:52:52.022333   28838 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-150873"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
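
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) make up the kubeadm config that is later written to /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of exercising such a file by hand on the control-plane node; the dry run is not something the test performs, but --config and --dry-run are standard kubeadm flags:

    # sketch: show what kubeadm would do with the generated config without touching the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
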
	
	I0410 21:52:52.022360   28838 kube-vip.go:111] generating kube-vip config ...
	I0410 21:52:52.022398   28838 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0410 21:52:52.035262   28838 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0410 21:52:52.035388   28838 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
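
The manifest above runs kube-vip as a static pod with leader election (vip_leaderelection) and control-plane load balancing (cp_enable, lb_enable) for the VIP 192.168.39.254 on eth0. A minimal sketch, assuming shell access to a control-plane node (not commands the test runs), of checking that the VIP is actually held:

    # sketch: the kube-vip container should be running and the elected leader should hold the VIP
    sudo crictl ps --name kube-vip
    ip addr show dev eth0 | grep -F '192.168.39.254'
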
	I0410 21:52:52.035439   28838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 21:52:52.045809   28838 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 21:52:52.045878   28838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0410 21:52:52.056106   28838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0410 21:52:52.073761   28838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 21:52:52.093533   28838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0410 21:52:52.113639   28838 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0410 21:52:52.133451   28838 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0410 21:52:52.139130   28838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 21:52:52.298795   28838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 21:52:52.317013   28838 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873 for IP: 192.168.39.12
	I0410 21:52:52.317035   28838 certs.go:194] generating shared ca certs ...
	I0410 21:52:52.317049   28838 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:52:52.317207   28838 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 21:52:52.317268   28838 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 21:52:52.317288   28838 certs.go:256] generating profile certs ...
	I0410 21:52:52.317381   28838 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/client.key
	I0410 21:52:52.317411   28838 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key.434639d3
	I0410 21:52:52.317431   28838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt.434639d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.213 192.168.39.143 192.168.39.254]
	I0410 21:52:52.708796   28838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt.434639d3 ...
	I0410 21:52:52.708829   28838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt.434639d3: {Name:mk1501dc67fd7c8d8a733778ec51a67d98f8dd6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:52:52.709020   28838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key.434639d3 ...
	I0410 21:52:52.709039   28838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key.434639d3: {Name:mk57920fc7ebe91730f5e8058b009a75614e19dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:52:52.709139   28838 certs.go:381] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt.434639d3 -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt
	I0410 21:52:52.709302   28838 certs.go:385] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key.434639d3 -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key
	I0410 21:52:52.709457   28838 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/proxy-client.key
	I0410 21:52:52.709474   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0410 21:52:52.709491   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0410 21:52:52.709507   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0410 21:52:52.709526   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0410 21:52:52.709542   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0410 21:52:52.709558   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0410 21:52:52.709574   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0410 21:52:52.709589   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0410 21:52:52.709688   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 21:52:52.709768   28838 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 21:52:52.709783   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 21:52:52.709818   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 21:52:52.709851   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 21:52:52.709883   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 21:52:52.709943   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 21:52:52.710000   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> /usr/share/ca-certificates/130012.pem
	I0410 21:52:52.710029   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:52:52.710045   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem -> /usr/share/ca-certificates/13001.pem
	I0410 21:52:52.710547   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 21:52:52.739805   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 21:52:52.767600   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 21:52:52.794794   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 21:52:52.822195   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0410 21:52:52.849264   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 21:52:52.876553   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 21:52:52.905449   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 21:52:52.935719   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 21:52:52.965741   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 21:52:52.993202   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 21:52:53.019601   28838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 21:52:53.037482   28838 ssh_runner.go:195] Run: openssl version
	I0410 21:52:53.043362   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 21:52:53.054582   28838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 21:52:53.059343   28838 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 21:52:53.059401   28838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 21:52:53.065564   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 21:52:53.075485   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 21:52:53.087649   28838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:52:53.092514   28838 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:52:53.092593   28838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:52:53.098672   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 21:52:53.108680   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 21:52:53.120151   28838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 21:52:53.125691   28838 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 21:52:53.125746   28838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 21:52:53.131792   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
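
The ln -fs steps above create OpenSSL-style hash links (3ec20f2e.0, b5213941.0, 51391683.0) so the copied PEMs are found during CA lookup; each link name is the subject-name hash printed by openssl x509 -hash plus a .0 suffix. A sketch of the equivalent manual step for one certificate:

    # sketch: compute the subject hash and create the lookup link, mirroring the commands above
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
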
	I0410 21:52:53.143410   28838 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 21:52:53.148422   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 21:52:53.155040   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 21:52:53.161092   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 21:52:53.167091   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 21:52:53.173144   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 21:52:53.179262   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
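
The -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours before it is reused. A hedged one-liner doing the same check by hand on one of the certs copied earlier in this log; exit status 0 means still valid for the window:

    # sketch: openssl exits 0 if the cert is still valid 86400 s (24 h) from now
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "ok: valid for >= 24h" || echo "warn: expires within 24h"
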
	I0410 21:52:53.185349   28838 kubeadm.go:391] StartCluster: {Name:ha-150873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-150873 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.143 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.144 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:52:53.185461   28838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 21:52:53.185544   28838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 21:52:53.230491   28838 cri.go:89] found id: "9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90"
	I0410 21:52:53.230515   28838 cri.go:89] found id: "1cfc2b9c051242d80ba7ad77f48c93b0c05cad4b86762bad6a1d854c33f6f32c"
	I0410 21:52:53.230523   28838 cri.go:89] found id: "c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2"
	I0410 21:52:53.230527   28838 cri.go:89] found id: "5565984567f9b26d4eed3577b07de6834f5ef76975cf4e514b712d250b43da66"
	I0410 21:52:53.230530   28838 cri.go:89] found id: "fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330"
	I0410 21:52:53.230534   28838 cri.go:89] found id: "a801aece5216f7e138337b799c1d603457c75338bc2d81915b8a2438f4c87070"
	I0410 21:52:53.230538   28838 cri.go:89] found id: "98119aea5e81af5a68cfed4eb015bf0f2b686e5a50dc607aca3240ee2f835f49"
	I0410 21:52:53.230541   28838 cri.go:89] found id: "e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196"
	I0410 21:52:53.230546   28838 cri.go:89] found id: "9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482"
	I0410 21:52:53.230553   28838 cri.go:89] found id: "538656d928f393189cf0534187fb39b2c64bb730a3116504548fdc465be1ea0a"
	I0410 21:52:53.230562   28838 cri.go:89] found id: ""
	I0410 21:52:53.230625   28838 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.115368774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712786116115328093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86075ff9-d8a3-4463-93d6-5c6c4ae6e17d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.116299819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26c42fbb-f1b5-4caa-92a5-3aba792e4559 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.116410457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26c42fbb-f1b5-4caa-92a5-3aba792e4559 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.117044856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712786022459649353,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712786020508110954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5843098e8f58e76b5e87b452629022743500b92173820f27b05241c46737470a,PodSandboxId:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712786011924871993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kubernetes.container.hash: ec06d454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712786010353918322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097ab2b7a3b21e861478a5265978920652458bdb04e361253d82c88339bbf66a,PodSandboxId:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712786000463841729,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:11b8446660febd3894b8ae348d19cb08dc586be0b366fe960017799e3ef498b9,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712785996449330227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666,PodSandboxId:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712785979114651690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3f65a763bc9e27d1d1cb7df78aaa507490cf2c0ef14a25459071556e5237bd19,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712785979187621109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63235fb
69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712785978670292567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74a5b5b09ea11da219a8235e9f05f
e927caf625ef95cdbf9ddb867aa7bcddce,PodSandboxId:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978953180708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59,PodSandboxId:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978833280294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712785978602127581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a
190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712785978526775957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05
169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf,PodSandboxId:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712785978512134053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e,PodSandboxId:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712785978086673766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubern
etes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14633f0c6042a07eb877fb35742fc7f78eaf5fc02579011e3f22392bd4705149,PodSandboxId:caa71de8a90bd8f405aa1d2b15a22b877e9efabdda5d3ab5654d3c60100c6f2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712785636537427350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kuberne
tes.container.hash: ec06d454,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90,PodSandboxId:477c4e121d289241b04e5bcba6621e3c962c07d0df1a2d85195741c8508989da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425699209159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2,PodSandboxId:7dcb837166455521932d4cb9f6dc4f1c30c3bbb463ace91c9b170b13eaa35891,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425572440741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330,PodSandboxId:215d1fe94079dd52ffc980ec77268193f9b6d373850752af5c7718762a5429df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0a
cea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712785423376402271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196,PodSandboxId:503f0fd82969812f24a9d05afabc98c944f7c8c319b5dd485703c8293c6cc2de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5
a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712785403622940875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482,PodSandboxId:3fad80ebf2adf1ef57f94d98afe692626c0000f4a7a16f2cc6934600c687c563,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1712785403608929394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[string]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26c42fbb-f1b5-4caa-92a5-3aba792e4559 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.165777954Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58230431-7f4d-4bf1-8adf-059dee45e8e2 name=/runtime.v1.RuntimeService/Version
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.165856357Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58230431-7f4d-4bf1-8adf-059dee45e8e2 name=/runtime.v1.RuntimeService/Version
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.166801549Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0584c344-a14d-4917-851c-868f8c4c24fe name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.167334694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712786116167309466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0584c344-a14d-4917-851c-868f8c4c24fe name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.167781119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e04be6fa-c1ce-4f2d-912e-76167bbb8f8c name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.167839399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e04be6fa-c1ce-4f2d-912e-76167bbb8f8c name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.168520318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712786022459649353,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712786020508110954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5843098e8f58e76b5e87b452629022743500b92173820f27b05241c46737470a,PodSandboxId:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712786011924871993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kubernetes.container.hash: ec06d454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712786010353918322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097ab2b7a3b21e861478a5265978920652458bdb04e361253d82c88339bbf66a,PodSandboxId:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712786000463841729,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:11b8446660febd3894b8ae348d19cb08dc586be0b366fe960017799e3ef498b9,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712785996449330227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666,PodSandboxId:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712785979114651690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3f65a763bc9e27d1d1cb7df78aaa507490cf2c0ef14a25459071556e5237bd19,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712785979187621109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63235fb
69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712785978670292567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74a5b5b09ea11da219a8235e9f05f
e927caf625ef95cdbf9ddb867aa7bcddce,PodSandboxId:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978953180708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59,PodSandboxId:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978833280294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712785978602127581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a
190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712785978526775957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05
169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf,PodSandboxId:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712785978512134053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e,PodSandboxId:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712785978086673766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubern
etes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14633f0c6042a07eb877fb35742fc7f78eaf5fc02579011e3f22392bd4705149,PodSandboxId:caa71de8a90bd8f405aa1d2b15a22b877e9efabdda5d3ab5654d3c60100c6f2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712785636537427350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kuberne
tes.container.hash: ec06d454,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90,PodSandboxId:477c4e121d289241b04e5bcba6621e3c962c07d0df1a2d85195741c8508989da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425699209159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2,PodSandboxId:7dcb837166455521932d4cb9f6dc4f1c30c3bbb463ace91c9b170b13eaa35891,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425572440741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330,PodSandboxId:215d1fe94079dd52ffc980ec77268193f9b6d373850752af5c7718762a5429df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0a
cea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712785423376402271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196,PodSandboxId:503f0fd82969812f24a9d05afabc98c944f7c8c319b5dd485703c8293c6cc2de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5
a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712785403622940875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482,PodSandboxId:3fad80ebf2adf1ef57f94d98afe692626c0000f4a7a16f2cc6934600c687c563,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1712785403608929394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[string]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e04be6fa-c1ce-4f2d-912e-76167bbb8f8c name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.224568133Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ae3e88b-4afd-44f6-865a-e3c2f4a8f44a name=/runtime.v1.RuntimeService/Version
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.224709391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ae3e88b-4afd-44f6-865a-e3c2f4a8f44a name=/runtime.v1.RuntimeService/Version
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.226277253Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=685d9712-333e-469b-8f90-b9c7eb1678dc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.226908333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712786116226868515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=685d9712-333e-469b-8f90-b9c7eb1678dc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.227818348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb32fa53-db32-4fbb-b530-4642f300a7f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.227958937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb32fa53-db32-4fbb-b530-4642f300a7f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.228394806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712786022459649353,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712786020508110954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5843098e8f58e76b5e87b452629022743500b92173820f27b05241c46737470a,PodSandboxId:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712786011924871993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kubernetes.container.hash: ec06d454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712786010353918322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097ab2b7a3b21e861478a5265978920652458bdb04e361253d82c88339bbf66a,PodSandboxId:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712786000463841729,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:11b8446660febd3894b8ae348d19cb08dc586be0b366fe960017799e3ef498b9,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712785996449330227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666,PodSandboxId:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712785979114651690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3f65a763bc9e27d1d1cb7df78aaa507490cf2c0ef14a25459071556e5237bd19,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712785979187621109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63235fb
69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712785978670292567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74a5b5b09ea11da219a8235e9f05f
e927caf625ef95cdbf9ddb867aa7bcddce,PodSandboxId:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978953180708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59,PodSandboxId:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978833280294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712785978602127581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a
190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712785978526775957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05
169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf,PodSandboxId:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712785978512134053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e,PodSandboxId:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712785978086673766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubern
etes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14633f0c6042a07eb877fb35742fc7f78eaf5fc02579011e3f22392bd4705149,PodSandboxId:caa71de8a90bd8f405aa1d2b15a22b877e9efabdda5d3ab5654d3c60100c6f2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712785636537427350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kuberne
tes.container.hash: ec06d454,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90,PodSandboxId:477c4e121d289241b04e5bcba6621e3c962c07d0df1a2d85195741c8508989da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425699209159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2,PodSandboxId:7dcb837166455521932d4cb9f6dc4f1c30c3bbb463ace91c9b170b13eaa35891,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425572440741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330,PodSandboxId:215d1fe94079dd52ffc980ec77268193f9b6d373850752af5c7718762a5429df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0a
cea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712785423376402271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196,PodSandboxId:503f0fd82969812f24a9d05afabc98c944f7c8c319b5dd485703c8293c6cc2de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5
a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712785403622940875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482,PodSandboxId:3fad80ebf2adf1ef57f94d98afe692626c0000f4a7a16f2cc6934600c687c563,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1712785403608929394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[string]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb32fa53-db32-4fbb-b530-4642f300a7f8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.292172873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24a5fda2-2f4a-47a8-a223-987fab783190 name=/runtime.v1.RuntimeService/Version
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.292270457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24a5fda2-2f4a-47a8-a223-987fab783190 name=/runtime.v1.RuntimeService/Version
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.293545639Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e27827db-eb88-4282-9293-abe566558245 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.294249782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712786116294211131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e27827db-eb88-4282-9293-abe566558245 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.294804598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=818b16fa-1d0f-4007-bef5-fd5781794ac8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.294894696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=818b16fa-1d0f-4007-bef5-fd5781794ac8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:55:16 ha-150873 crio[3171]: time="2024-04-10 21:55:16.295523749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712786022459649353,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712786020508110954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5843098e8f58e76b5e87b452629022743500b92173820f27b05241c46737470a,PodSandboxId:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712786011924871993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kubernetes.container.hash: ec06d454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712786010353918322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097ab2b7a3b21e861478a5265978920652458bdb04e361253d82c88339bbf66a,PodSandboxId:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712786000463841729,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:11b8446660febd3894b8ae348d19cb08dc586be0b366fe960017799e3ef498b9,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712785996449330227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666,PodSandboxId:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712785979114651690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3f65a763bc9e27d1d1cb7df78aaa507490cf2c0ef14a25459071556e5237bd19,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712785979187621109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63235fb
69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712785978670292567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74a5b5b09ea11da219a8235e9f05f
e927caf625ef95cdbf9ddb867aa7bcddce,PodSandboxId:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978953180708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59,PodSandboxId:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978833280294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712785978602127581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a
190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712785978526775957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05
169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf,PodSandboxId:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712785978512134053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e,PodSandboxId:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712785978086673766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubern
etes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14633f0c6042a07eb877fb35742fc7f78eaf5fc02579011e3f22392bd4705149,PodSandboxId:caa71de8a90bd8f405aa1d2b15a22b877e9efabdda5d3ab5654d3c60100c6f2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712785636537427350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kuberne
tes.container.hash: ec06d454,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90,PodSandboxId:477c4e121d289241b04e5bcba6621e3c962c07d0df1a2d85195741c8508989da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425699209159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2,PodSandboxId:7dcb837166455521932d4cb9f6dc4f1c30c3bbb463ace91c9b170b13eaa35891,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425572440741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330,PodSandboxId:215d1fe94079dd52ffc980ec77268193f9b6d373850752af5c7718762a5429df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0a
cea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712785423376402271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196,PodSandboxId:503f0fd82969812f24a9d05afabc98c944f7c8c319b5dd485703c8293c6cc2de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5
a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712785403622940875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482,PodSandboxId:3fad80ebf2adf1ef57f94d98afe692626c0000f4a7a16f2cc6934600c687c563,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1712785403608929394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[string]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=818b16fa-1d0f-4007-bef5-fd5781794ac8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	967394af8ee84       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               2                   ae41a3393e770       kindnet-twk5c
	edff49423a013       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   2                   7b338618978c4       kube-controller-manager-ha-150873
	5843098e8f58e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   c3b45aeeff5a4       busybox-7fdf7869d9-npbvn
	e235f69edc188       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            2                   ad883c5504ae7       kube-apiserver-ha-150873
	097ab2b7a3b21       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      About a minute ago   Running             kube-vip                  0                   de11001c92427       kube-vip-ha-150873
	11b8446660feb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       2                   90789bebad3c1       storage-provisioner
	3f65a763bc9e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       1                   90789bebad3c1       storage-provisioner
	3934f37403fb0       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      2 minutes ago        Running             kube-proxy                1                   9a4ff4c8cdaeb       kube-proxy-4k6ws
	e74a5b5b09ea1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   1d197657a29aa       coredns-76f75df574-v7npj
	6931407fcaa90       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   db6b3dc77ab9f       coredns-76f75df574-lv7pk
	d63235fb69e81       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      2 minutes ago        Exited              kube-apiserver            1                   ad883c5504ae7       kube-apiserver-ha-150873
	4a7bcf4449817       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               1                   ae41a3393e770       kindnet-twk5c
	30486e357194b       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      2 minutes ago        Exited              kube-controller-manager   1                   7b338618978c4       kube-controller-manager-ha-150873
	cd121dc8b073b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   1426294dc588a       etcd-ha-150873
	290a7180b1a6d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      2 minutes ago        Running             kube-scheduler            1                   e25cccd588a35       kube-scheduler-ha-150873
	14633f0c6042a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   caa71de8a90bd       busybox-7fdf7869d9-npbvn
	9eed089604ddb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      11 minutes ago       Exited              coredns                   0                   477c4e121d289       coredns-76f75df574-lv7pk
	c0533fe1ed46a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      11 minutes ago       Exited              coredns                   0                   7dcb837166455       coredns-76f75df574-v7npj
	fb2a3cd16e18f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      11 minutes ago       Exited              kube-proxy                0                   215d1fe94079d       kube-proxy-4k6ws
	e35fb1c2a3e47       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      11 minutes ago       Exited              kube-scheduler            0                   503f0fd829698       kube-scheduler-ha-150873
	9b735e1f5e994       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago       Exited              etcd                      0                   3fad80ebf2adf       etcd-ha-150873
	
	
	==> coredns [6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59529 - 44673 "HINFO IN 6865337407444359154.4190026639490365014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009281124s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[984018151]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:00.782) (total time: 10000ms):
	Trace[984018151]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (21:53:10.782)
	Trace[984018151]: [10.000761949s] [10.000761949s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1469142125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:15.210) (total time: 10002ms):
	Trace[1469142125]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:53:25.212)
	Trace[1469142125]: [10.00202566s] [10.00202566s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: Trace[1635658144]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:12.729) (total time: 13491ms):
	Trace[1635658144]: ---"Objects listed" error:<nil> 13491ms (21:53:26.221)
	Trace[1635658144]: [13.491453637s] [13.491453637s] END
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90] <==
	[INFO] 10.244.2.2:35532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176815s
	[INFO] 10.244.2.2:46490 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00333312s
	[INFO] 10.244.2.2:59282 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000215594s
	[INFO] 10.244.2.2:58799 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008975865s
	[INFO] 10.244.2.2:45397 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000213098s
	[INFO] 10.244.0.4:52917 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001975693s
	[INFO] 10.244.0.4:52069 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000198372s
	[INFO] 10.244.2.3:49729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00309164s
	[INFO] 10.244.2.3:49196 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104729s
	[INFO] 10.244.2.3:37101 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001584562s
	[INFO] 10.244.2.3:33940 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013568s
	[INFO] 10.244.2.2:34643 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163795s
	[INFO] 10.244.2.2:58342 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152915s
	[INFO] 10.244.2.2:59095 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000220546s
	[INFO] 10.244.0.4:52549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152613s
	[INFO] 10.244.0.4:41887 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065745s
	[INFO] 10.244.2.3:34633 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114623s
	[INFO] 10.244.2.3:40780 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152976s
	[INFO] 10.244.2.3:56929 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159054s
	[INFO] 10.244.0.4:44686 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143676s
	[INFO] 10.244.2.3:45732 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00027179s
	[INFO] 10.244.2.3:51852 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093379s
	[INFO] 10.244.2.3:37254 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000217939s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2] <==
	[INFO] 10.244.2.2:50233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174724s
	[INFO] 10.244.0.4:60778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115782s
	[INFO] 10.244.0.4:55354 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140158s
	[INFO] 10.244.0.4:34877 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072811s
	[INFO] 10.244.0.4:40982 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001537342s
	[INFO] 10.244.0.4:36482 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139429s
	[INFO] 10.244.0.4:48167 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117261s
	[INFO] 10.244.2.3:57824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145834s
	[INFO] 10.244.2.3:60878 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110073s
	[INFO] 10.244.2.3:50412 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088375s
	[INFO] 10.244.2.3:59910 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095369s
	[INFO] 10.244.2.2:33569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205878s
	[INFO] 10.244.0.4:60872 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079429s
	[INFO] 10.244.0.4:48499 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047325s
	[INFO] 10.244.2.3:41098 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109916s
	[INFO] 10.244.2.2:34300 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163456s
	[INFO] 10.244.2.2:45314 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000257345s
	[INFO] 10.244.2.2:55265 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000197798s
	[INFO] 10.244.2.2:59960 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169363s
	[INFO] 10.244.0.4:49737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111508s
	[INFO] 10.244.0.4:59509 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009348s
	[INFO] 10.244.0.4:38242 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134044s
	[INFO] 10.244.2.3:43629 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170405s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e74a5b5b09ea11da219a8235e9f05fe927caf625ef95cdbf9ddb867aa7bcddce] <==
	[INFO] 127.0.0.1:59682 - 42308 "HINFO IN 8477535371905894803.3423353897684578802. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009750105s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[237118921]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:00.543) (total time: 10001ms):
	Trace[237118921]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:53:10.545)
	Trace[237118921]: [10.001633953s] [10.001633953s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[791440332]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:00.580) (total time: 10002ms):
	Trace[791440332]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (21:53:10.582)
	Trace[791440332]: [10.002164677s] [10.002164677s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[524857238]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:02.959) (total time: 10004ms):
	Trace[524857238]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10004ms (21:53:12.964)
	Trace[524857238]: [10.004646672s] [10.004646672s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33754->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33754->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33760->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33760->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-150873
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150873
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=ha-150873
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T21_43_30_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 21:43:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150873
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 21:55:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 21:53:40 +0000   Wed, 10 Apr 2024 21:43:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 21:53:40 +0000   Wed, 10 Apr 2024 21:43:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 21:53:40 +0000   Wed, 10 Apr 2024 21:43:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 21:53:40 +0000   Wed, 10 Apr 2024 21:43:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    ha-150873
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 210c19b8db52473d80a14cc460d46534
	  System UUID:                210c19b8-db52-473d-80a1-4cc460d46534
	  Boot ID:                    bf770617-465c-438e-8544-6b98882b4c4e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-npbvn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 coredns-76f75df574-lv7pk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-76f75df574-v7npj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-ha-150873                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-twk5c                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-150873             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-150873    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4k6ws                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-150873             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-150873                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 102s                   kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-150873 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-150873 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-150873 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node ha-150873 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node ha-150873 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node ha-150873 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                    node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-150873 status is now: NodeReady
	  Normal   RegisteredNode           9m22s                  node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Normal   RegisteredNode           8m11s                  node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Normal   RegisteredNode           5m59s                  node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Warning  ContainerGCFailed        2m46s (x2 over 3m46s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           98s                    node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Normal   RegisteredNode           83s                    node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Normal   RegisteredNode           24s                    node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	
	
	Name:               ha-150873-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150873-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=ha-150873
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_10T21_45_41_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 21:45:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150873-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 21:55:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 21:54:20 +0000   Wed, 10 Apr 2024 21:45:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 21:54:20 +0000   Wed, 10 Apr 2024 21:45:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 21:54:20 +0000   Wed, 10 Apr 2024 21:45:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 21:54:20 +0000   Wed, 10 Apr 2024 21:45:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.213
	  Hostname:    ha-150873-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3944eabc4864788a38a16b5adee9ecb
	  System UUID:                d3944eab-c486-4788-a38a-16b5adee9ecb
	  Boot ID:                    96f79656-01cd-45f6-920c-f1545a109dac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-150873-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m39s
	  kube-system                 kindnet-lgqxz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m40s
	  kube-system                 kube-apiserver-ha-150873-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-controller-manager-ha-150873-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-proxy-f5g7z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 kube-scheduler-ha-150873-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-vip-ha-150873-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 102s                   kube-proxy       
	  Normal   Starting                 6m13s                  kube-proxy       
	  Normal   Starting                 9m36s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  9m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m40s                  node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   NodeHasSufficientMemory  9m40s (x8 over 9m40s)  kubelet          Node ha-150873-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m40s (x8 over 9m40s)  kubelet          Node ha-150873-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m40s (x7 over 9m40s)  kubelet          Node ha-150873-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m22s                  node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   RegisteredNode           8m11s                  node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   NodeHasNoDiskPressure    6m31s                  kubelet          Node ha-150873-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m31s                  kubelet          Node ha-150873-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     6m31s                  kubelet          Node ha-150873-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 6m31s                  kubelet          Node ha-150873-m02 has been rebooted, boot id: 96f79656-01cd-45f6-920c-f1545a109dac
	  Normal   RegisteredNode           5m59s                  node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   RegisteredNode           98s                    node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   RegisteredNode           83s                    node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   RegisteredNode           24s                    node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	
	
	Name:               ha-150873-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150873-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=ha-150873
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_10T21_46_52_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 21:46:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150873-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 21:55:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 21:54:46 +0000   Wed, 10 Apr 2024 21:54:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 21:54:46 +0000   Wed, 10 Apr 2024 21:54:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 21:54:46 +0000   Wed, 10 Apr 2024 21:54:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 21:54:46 +0000   Wed, 10 Apr 2024 21:54:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.143
	  Hostname:    ha-150873-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 20ca1c8745964ad09a7bfc2f60cad90f
	  System UUID:                20ca1c87-4596-4ad0-9a7b-fc2f60cad90f
	  Boot ID:                    84ba30df-bcde-43c7-acc8-7d23f8526d07
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-c58s7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  default                     busybox-7fdf7869d9-v9dkg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 etcd-ha-150873-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m27s
	  kube-system                 kindnet-8g2nd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m29s
	  kube-system                 kube-apiserver-ha-150873-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-ha-150873-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-crbpf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-scheduler-ha-150873-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-vip-ha-150873-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 43s                    kube-proxy       
	  Normal   Starting                 8m25s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  8m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m29s (x8 over 8m29s)  kubelet          Node ha-150873-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m29s (x8 over 8m29s)  kubelet          Node ha-150873-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m29s (x7 over 8m29s)  kubelet          Node ha-150873-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m27s                  node-controller  Node ha-150873-m03 event: Registered Node ha-150873-m03 in Controller
	  Normal   RegisteredNode           8m25s                  node-controller  Node ha-150873-m03 event: Registered Node ha-150873-m03 in Controller
	  Normal   RegisteredNode           8m11s                  node-controller  Node ha-150873-m03 event: Registered Node ha-150873-m03 in Controller
	  Normal   RegisteredNode           5m59s                  node-controller  Node ha-150873-m03 event: Registered Node ha-150873-m03 in Controller
	  Normal   NodeNotReady             5m25s                  node-controller  Node ha-150873-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           98s                    node-controller  Node ha-150873-m03 event: Registered Node ha-150873-m03 in Controller
	  Normal   RegisteredNode           83s                    node-controller  Node ha-150873-m03 event: Registered Node ha-150873-m03 in Controller
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s (x3 over 60s)      kubelet          Node ha-150873-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x3 over 60s)      kubelet          Node ha-150873-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x3 over 60s)      kubelet          Node ha-150873-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 60s (x2 over 60s)      kubelet          Node ha-150873-m03 has been rebooted, boot id: 84ba30df-bcde-43c7-acc8-7d23f8526d07
	  Normal   NodeReady                60s (x2 over 60s)      kubelet          Node ha-150873-m03 status is now: NodeReady
	  Normal   RegisteredNode           24s                    node-controller  Node ha-150873-m03 event: Registered Node ha-150873-m03 in Controller
	
	
	Name:               ha-150873-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150873-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=ha-150873
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_10T21_47_51_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 21:47:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150873-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 21:55:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 21:55:08 +0000   Wed, 10 Apr 2024 21:55:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 21:55:08 +0000   Wed, 10 Apr 2024 21:55:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 21:55:08 +0000   Wed, 10 Apr 2024 21:55:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 21:55:08 +0000   Wed, 10 Apr 2024 21:55:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    ha-150873-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27b922c08d3847d0b0b745258dd668cb
	  System UUID:                27b922c0-8d38-47d0-b0b7-45258dd668cb
	  Boot ID:                    67a43c58-9402-4390-ba4b-7856a43ec8b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-p9lff       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m27s
	  kube-system                 kube-proxy-8ttrp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m20s                  kube-proxy       
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   NodeHasSufficientMemory  7m27s (x2 over 7m27s)  kubelet          Node ha-150873-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m27s (x2 over 7m27s)  kubelet          Node ha-150873-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m27s (x2 over 7m27s)  kubelet          Node ha-150873-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           7m26s                  node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   RegisteredNode           7m23s                  node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   RegisteredNode           7m22s                  node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   NodeReady                7m16s                  kubelet          Node ha-150873-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m                     node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   NodeNotReady             5m20s                  node-controller  Node ha-150873-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           99s                    node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   RegisteredNode           84s                    node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   RegisteredNode           25s                    node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)        kubelet          Node ha-150873-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)        kubelet          Node ha-150873-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)        kubelet          Node ha-150873-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                     kubelet          Node ha-150873-m04 has been rebooted, boot id: 67a43c58-9402-4390-ba4b-7856a43ec8b1
	  Normal   NodeReady                9s                     kubelet          Node ha-150873-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.447730] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.056947] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062963] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.170702] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.150944] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.279384] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.469027] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.060705] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.318971] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.060287] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.868224] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	[  +0.092851] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.097200] kauditd_printk_skb: 21 callbacks suppressed
	[Apr10 21:45] kauditd_printk_skb: 72 callbacks suppressed
	[Apr10 21:52] systemd-fstab-generator[3090]: Ignoring "noauto" option for root device
	[  +0.169551] systemd-fstab-generator[3102]: Ignoring "noauto" option for root device
	[  +0.192650] systemd-fstab-generator[3116]: Ignoring "noauto" option for root device
	[  +0.148316] systemd-fstab-generator[3128]: Ignoring "noauto" option for root device
	[  +0.316741] systemd-fstab-generator[3156]: Ignoring "noauto" option for root device
	[  +2.656359] systemd-fstab-generator[3258]: Ignoring "noauto" option for root device
	[  +5.666994] kauditd_printk_skb: 122 callbacks suppressed
	[Apr10 21:53] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.590165] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.699192] kauditd_printk_skb: 11 callbacks suppressed
	[ +16.648316] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482] <==
	{"level":"warn","ts":"2024-04-10T21:51:18.251623Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.143:2380/version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: i/o timeout"}
	{"level":"warn","ts":"2024-04-10T21:51:18.251729Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: i/o timeout"}
	{"level":"info","ts":"2024-04-10T21:51:18.261155Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ce0da4c06908115c"}
	{"level":"warn","ts":"2024-04-10T21:51:18.261347Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ce0da4c06908115c"}
	{"level":"info","ts":"2024-04-10T21:51:18.261405Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ce0da4c06908115c"}
	{"level":"warn","ts":"2024-04-10T21:51:18.261515Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ce0da4c06908115c"}
	{"level":"info","ts":"2024-04-10T21:51:18.261552Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ce0da4c06908115c"}
	{"level":"info","ts":"2024-04-10T21:51:18.261712Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"ce0da4c06908115c"}
	{"level":"warn","ts":"2024-04-10T21:51:18.262117Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"ce0da4c06908115c","error":"context canceled"}
	{"level":"warn","ts":"2024-04-10T21:51:18.262207Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"ce0da4c06908115c","error":"failed to read ce0da4c06908115c on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-10T21:51:18.26227Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"ce0da4c06908115c"}
	{"level":"warn","ts":"2024-04-10T21:51:18.262466Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"ce0da4c06908115c","error":"context canceled"}
	{"level":"info","ts":"2024-04-10T21:51:18.262536Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"ce0da4c06908115c"}
	{"level":"info","ts":"2024-04-10T21:51:18.262582Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ce0da4c06908115c"}
	{"level":"info","ts":"2024-04-10T21:51:18.262614Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.26265Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.262696Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.263541Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.266764Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.266883Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.266925Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.271774Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"warn","ts":"2024-04-10T21:51:18.272072Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.213:39090","server-name":"","error":"read tcp 192.168.39.12:2380->192.168.39.213:39090: use of closed network connection"}
	{"level":"info","ts":"2024-04-10T21:51:18.823098Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"info","ts":"2024-04-10T21:51:18.823212Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-150873","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.12:2380"],"advertise-client-urls":["https://192.168.39.12:2379"]}
	
	
	==> etcd [cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf] <==
	{"level":"warn","ts":"2024-04-10T21:54:14.498304Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:14.498352Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:17.880226Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.143:2380/version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:17.880321Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:19.499287Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:19.499318Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:21.882491Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.143:2380/version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:21.882647Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:24.500078Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:24.500241Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:25.885075Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.143:2380/version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:25.885144Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:29.501256Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:29.501503Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:29.887282Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.143:2380/version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:29.887435Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-10T21:54:31.892371Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:54:31.892565Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:54:31.900889Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:54:31.917631Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ab0e927fe14112bb","to":"9296e3e22927d2a2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-10T21:54:31.919836Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:54:31.91968Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ab0e927fe14112bb","to":"9296e3e22927d2a2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-10T21:54:31.920113Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"warn","ts":"2024-04-10T21:54:34.502007Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:34.50211Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	
	
	==> kernel <==
	 21:55:17 up 12 min,  0 users,  load average: 0.42, 0.42, 0.26
	Linux ha-150873 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757] <==
	I0410 21:52:59.385771       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0410 21:52:59.387907       1 main.go:107] hostIP = 192.168.39.12
	podIP = 192.168.39.12
	I0410 21:52:59.388262       1 main.go:116] setting mtu 1500 for CNI 
	I0410 21:52:59.388328       1 main.go:146] kindnetd IP family: "ipv4"
	I0410 21:52:59.388715       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0410 21:53:09.651210       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0410 21:53:09.652240       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0410 21:53:21.331104       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.62:57886->10.96.0.1:443: read: connection reset by peer
	I0410 21:53:23.334966       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0410 21:53:26.336830       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20] <==
	I0410 21:54:43.480276       1 main.go:250] Node ha-150873-m04 has CIDR [10.244.3.0/24] 
	I0410 21:54:53.492334       1 main.go:223] Handling node with IPs: map[192.168.39.12:{}]
	I0410 21:54:53.492477       1 main.go:227] handling current node
	I0410 21:54:53.492531       1 main.go:223] Handling node with IPs: map[192.168.39.213:{}]
	I0410 21:54:53.492558       1 main.go:250] Node ha-150873-m02 has CIDR [10.244.1.0/24] 
	I0410 21:54:53.492698       1 main.go:223] Handling node with IPs: map[192.168.39.143:{}]
	I0410 21:54:53.492737       1 main.go:250] Node ha-150873-m03 has CIDR [10.244.2.0/24] 
	I0410 21:54:53.492821       1 main.go:223] Handling node with IPs: map[192.168.39.144:{}]
	I0410 21:54:53.492857       1 main.go:250] Node ha-150873-m04 has CIDR [10.244.3.0/24] 
	I0410 21:55:03.501613       1 main.go:223] Handling node with IPs: map[192.168.39.12:{}]
	I0410 21:55:03.501711       1 main.go:227] handling current node
	I0410 21:55:03.501754       1 main.go:223] Handling node with IPs: map[192.168.39.213:{}]
	I0410 21:55:03.501783       1 main.go:250] Node ha-150873-m02 has CIDR [10.244.1.0/24] 
	I0410 21:55:03.501927       1 main.go:223] Handling node with IPs: map[192.168.39.143:{}]
	I0410 21:55:03.501960       1 main.go:250] Node ha-150873-m03 has CIDR [10.244.2.0/24] 
	I0410 21:55:03.502120       1 main.go:223] Handling node with IPs: map[192.168.39.144:{}]
	I0410 21:55:03.502161       1 main.go:250] Node ha-150873-m04 has CIDR [10.244.3.0/24] 
	I0410 21:55:13.517142       1 main.go:223] Handling node with IPs: map[192.168.39.12:{}]
	I0410 21:55:13.517186       1 main.go:227] handling current node
	I0410 21:55:13.517197       1 main.go:223] Handling node with IPs: map[192.168.39.213:{}]
	I0410 21:55:13.517203       1 main.go:250] Node ha-150873-m02 has CIDR [10.244.1.0/24] 
	I0410 21:55:13.517309       1 main.go:223] Handling node with IPs: map[192.168.39.143:{}]
	I0410 21:55:13.517319       1 main.go:250] Node ha-150873-m03 has CIDR [10.244.2.0/24] 
	I0410 21:55:13.517356       1 main.go:223] Handling node with IPs: map[192.168.39.144:{}]
	I0410 21:55:13.517385       1 main.go:250] Node ha-150873-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d63235fb69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923] <==
	I0410 21:52:59.530397       1 options.go:222] external host was not specified, using 192.168.39.12
	I0410 21:52:59.532350       1 server.go:148] Version: v1.29.3
	I0410 21:52:59.532399       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 21:53:00.300095       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0410 21:53:00.309296       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0410 21:53:00.309373       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0410 21:53:00.309772       1 instance.go:297] Using reconciler: lease
	W0410 21:53:20.295752       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0410 21:53:20.297580       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0410 21:53:20.311514       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907] <==
	I0410 21:53:32.542379       1 establishing_controller.go:76] Starting EstablishingController
	I0410 21:53:32.542415       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0410 21:53:32.542485       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0410 21:53:32.542524       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0410 21:53:32.543753       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0410 21:53:32.543790       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0410 21:53:32.644233       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0410 21:53:32.645746       1 aggregator.go:165] initial CRD sync complete...
	I0410 21:53:32.645785       1 autoregister_controller.go:141] Starting autoregister controller
	I0410 21:53:32.645794       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0410 21:53:32.647236       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0410 21:53:32.647268       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0410 21:53:32.661864       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0410 21:53:32.671939       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	W0410 21:53:32.709057       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.213]
	I0410 21:53:32.711221       1 controller.go:624] quota admission added evaluator for: endpoints
	I0410 21:53:32.728787       1 shared_informer.go:318] Caches are synced for configmaps
	I0410 21:53:32.731252       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0410 21:53:32.732864       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0410 21:53:32.738893       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0410 21:53:32.739301       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E0410 21:53:32.743789       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0410 21:53:32.747674       1 cache.go:39] Caches are synced for autoregister controller
	I0410 21:53:33.537493       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0410 21:53:33.978663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12 192.168.39.213]
	
	
	==> kube-controller-manager [30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d] <==
	I0410 21:53:00.473448       1 serving.go:380] Generated self-signed cert in-memory
	I0410 21:53:00.911642       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0410 21:53:00.911695       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 21:53:00.913765       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0410 21:53:00.913925       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0410 21:53:00.914811       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0410 21:53:00.914887       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0410 21:53:21.321644       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.12:8443/healthz\": dial tcp 192.168.39.12:8443: connect: connection refused"
	
	
	==> kube-controller-manager [edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b] <==
	I0410 21:53:53.131584       1 shared_informer.go:318] Caches are synced for daemon sets
	I0410 21:53:53.148210       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-150873-m02"
	I0410 21:53:53.148971       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-150873-m03"
	I0410 21:53:53.149146       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-150873"
	I0410 21:53:53.148973       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-150873-m04"
	I0410 21:53:53.149238       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0410 21:53:53.149264       1 event.go:376] "Event occurred" object="ha-150873" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-150873 event: Registered Node ha-150873 in Controller"
	I0410 21:53:53.174070       1 shared_informer.go:318] Caches are synced for resource quota
	I0410 21:53:53.527435       1 shared_informer.go:318] Caches are synced for garbage collector
	I0410 21:53:53.527490       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0410 21:53:53.546720       1 shared_informer.go:318] Caches are synced for garbage collector
	I0410 21:53:55.436323       1 event.go:376] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0410 21:53:55.452968       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-5xfwx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-5xfwx\": the object has been modified; please apply your changes to the latest version and try again"
	I0410 21:53:55.453509       1 event.go:364] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f9ce723f-b216-4cff-9dad-695b62b11183", APIVersion:"v1", ResourceVersion:"285", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-5xfwx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-5xfwx": the object has been modified; please apply your changes to the latest version and try again
	I0410 21:53:55.470238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="71.752308ms"
	I0410 21:53:55.470367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="70.861µs"
	I0410 21:54:17.239489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="178.347µs"
	I0410 21:54:17.279940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="74.038µs"
	I0410 21:54:18.173599       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-v9dkg" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-v9dkg"
	I0410 21:54:18.173645       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-c58s7" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-c58s7"
	I0410 21:54:40.766956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="50.829874ms"
	I0410 21:54:40.767268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.587µs"
	I0410 21:54:41.737051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.486557ms"
	I0410 21:54:41.737220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="81.92µs"
	I0410 21:55:08.353680       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-150873-m04"
	
	
	==> kube-proxy [3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666] <==
	I0410 21:53:00.635114       1 server_others.go:72] "Using iptables proxy"
	E0410 21:53:10.639387       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150873\": net/http: TLS handshake timeout"
	E0410 21:53:23.824509       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150873\": dial tcp 192.168.39.254:8443: connect: no route to host - error from a previous attempt: read tcp 192.168.39.254:54440->192.168.39.254:8443: read: connection reset by peer"
	E0410 21:53:29.968692       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150873\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0410 21:53:34.072077       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	I0410 21:53:34.119667       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 21:53:34.119698       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 21:53:34.119757       1 server_others.go:168] "Using iptables Proxier"
	I0410 21:53:34.122961       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 21:53:34.123431       1 server.go:865] "Version info" version="v1.29.3"
	I0410 21:53:34.123520       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 21:53:34.125671       1 config.go:188] "Starting service config controller"
	I0410 21:53:34.125768       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 21:53:34.125820       1 config.go:97] "Starting endpoint slice config controller"
	I0410 21:53:34.125843       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 21:53:34.127857       1 config.go:315] "Starting node config controller"
	I0410 21:53:34.127897       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 21:53:34.226948       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0410 21:53:34.227076       1 shared_informer.go:318] Caches are synced for service config
	I0410 21:53:34.228561       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330] <==
	I0410 21:43:43.749344       1 server_others.go:72] "Using iptables proxy"
	I0410 21:43:43.784502       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	I0410 21:43:43.854319       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 21:43:43.854383       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 21:43:43.854398       1 server_others.go:168] "Using iptables Proxier"
	I0410 21:43:43.858159       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 21:43:43.858833       1 server.go:865] "Version info" version="v1.29.3"
	I0410 21:43:43.858869       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 21:43:43.860754       1 config.go:188] "Starting service config controller"
	I0410 21:43:43.861101       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 21:43:43.861154       1 config.go:97] "Starting endpoint slice config controller"
	I0410 21:43:43.861161       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 21:43:43.862168       1 config.go:315] "Starting node config controller"
	I0410 21:43:43.862199       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 21:43:43.961250       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0410 21:43:43.961252       1 shared_informer.go:318] Caches are synced for service config
	I0410 21:43:43.963072       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e] <==
	W0410 21:53:29.052401       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.12:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:29.052514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.12:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:29.405957       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:29.406230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:29.504269       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: Get "https://192.168.39.12:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:29.504391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.12:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:29.717653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.39.12:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:29.717757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.12:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:30.130751       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:30.130844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:30.210193       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:30.210262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:30.457616       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:30.457698       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:30.562591       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.12:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:30.562742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.12:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:30.576712       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.39.12:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:30.576837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.12:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:32.569826       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0410 21:53:32.571178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0410 21:53:32.571410       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0410 21:53:32.573114       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0410 21:53:32.573459       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0410 21:53:32.573561       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0410 21:53:35.327664       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196] <==
	E0410 21:43:28.107358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0410 21:43:28.162684       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0410 21:43:28.162743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0410 21:43:28.163695       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0410 21:43:28.163748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0410 21:43:31.036697       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0410 21:47:13.512657       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-c58s7\": pod busybox-7fdf7869d9-c58s7 is already assigned to node \"ha-150873-m03\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-c58s7" node="ha-150873-m03"
	E0410 21:47:13.513348       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-c58s7\": pod busybox-7fdf7869d9-c58s7 is already assigned to node \"ha-150873-m03\"" pod="default/busybox-7fdf7869d9-c58s7"
	I0410 21:47:13.559078       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="9ff96a1f-d71f-41b7-89ec-8cb7a94c0231" pod="default/busybox-7fdf7869d9-npbvn" assumedNode="ha-150873" currentNode="ha-150873-m02"
	E0410 21:47:13.566632       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-v9dkg\": pod busybox-7fdf7869d9-v9dkg is already assigned to node \"ha-150873-m03\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-v9dkg" node="ha-150873-m03"
	E0410 21:47:13.567918       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod cb1241f3-24e3-42dc-999d-813be2d647d3(default/busybox-7fdf7869d9-v9dkg) wasn't assumed so cannot be forgotten"
	E0410 21:47:13.570306       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-npbvn\": pod busybox-7fdf7869d9-npbvn is already assigned to node \"ha-150873\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-npbvn" node="ha-150873-m02"
	E0410 21:47:13.570933       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231(default/busybox-7fdf7869d9-npbvn) was assumed on ha-150873-m02 but assigned to ha-150873"
	E0410 21:47:13.571181       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-npbvn\": pod busybox-7fdf7869d9-npbvn is already assigned to node \"ha-150873\"" pod="default/busybox-7fdf7869d9-npbvn"
	I0410 21:47:13.571297       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-npbvn" node="ha-150873"
	E0410 21:47:13.573202       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-v9dkg\": pod busybox-7fdf7869d9-v9dkg is already assigned to node \"ha-150873-m03\"" pod="default/busybox-7fdf7869d9-v9dkg"
	I0410 21:47:13.573415       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-v9dkg" node="ha-150873-m03"
	E0410 21:47:51.027492       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8ttrp\": pod kube-proxy-8ttrp is already assigned to node \"ha-150873-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8ttrp" node="ha-150873-m04"
	E0410 21:47:51.027849       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod fc2bb477-e139-43c4-a27a-00a2c214d2d3(kube-system/kube-proxy-8ttrp) wasn't assumed so cannot be forgotten"
	E0410 21:47:51.028127       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8ttrp\": pod kube-proxy-8ttrp is already assigned to node \"ha-150873-m04\"" pod="kube-system/kube-proxy-8ttrp"
	I0410 21:47:51.028323       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8ttrp" node="ha-150873-m04"
	E0410 21:47:51.044462       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p9lff\": pod kindnet-p9lff is already assigned to node \"ha-150873-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p9lff" node="ha-150873-m04"
	E0410 21:47:51.044538       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 3e6cb7bd-84c9-4146-a1a0-32e97b598ec2(kube-system/kindnet-p9lff) wasn't assumed so cannot be forgotten"
	E0410 21:47:51.044623       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p9lff\": pod kindnet-p9lff is already assigned to node \"ha-150873-m04\"" pod="kube-system/kindnet-p9lff"
	I0410 21:47:51.044656       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p9lff" node="ha-150873-m04"
	
	
	==> kubelet <==
	Apr 10 21:53:29 ha-150873 kubelet[1392]: E0410 21:53:29.967299    1392 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 10 21:53:29 ha-150873 kubelet[1392]: E0410 21:53:29.966732    1392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=1912&timeout=6m51s&timeoutSeconds=411&watch=true": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 10 21:53:30 ha-150873 kubelet[1392]: I0410 21:53:30.341382    1392 scope.go:117] "RemoveContainer" containerID="d63235fb69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923"
	Apr 10 21:53:30 ha-150873 kubelet[1392]: I0410 21:53:30.420690    1392 scope.go:117] "RemoveContainer" containerID="a801aece5216f7e138337b799c1d603457c75338bc2d81915b8a2438f4c87070"
	Apr 10 21:53:30 ha-150873 kubelet[1392]: E0410 21:53:30.534715    1392 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 21:53:30 ha-150873 kubelet[1392]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 21:53:30 ha-150873 kubelet[1392]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 21:53:30 ha-150873 kubelet[1392]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 21:53:30 ha-150873 kubelet[1392]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 21:53:31 ha-150873 kubelet[1392]: I0410 21:53:31.162342    1392 scope.go:117] "RemoveContainer" containerID="5565984567f9b26d4eed3577b07de6834f5ef76975cf4e514b712d250b43da66"
	Apr 10 21:53:31 ha-150873 kubelet[1392]: I0410 21:53:31.162658    1392 scope.go:117] "RemoveContainer" containerID="4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757"
	Apr 10 21:53:31 ha-150873 kubelet[1392]: E0410 21:53:31.162888    1392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-twk5c_kube-system(ebfddcc6-a190-4756-9096-1dc2cec68cf7)\"" pod="kube-system/kindnet-twk5c" podUID="ebfddcc6-a190-4756-9096-1dc2cec68cf7"
	Apr 10 21:53:33 ha-150873 kubelet[1392]: E0410 21:53:33.038859    1392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-150873?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Apr 10 21:53:33 ha-150873 kubelet[1392]: I0410 21:53:33.038858    1392 status_manager.go:853] "Failed to get status for pod" podUID="e6eff29f33f6e236015d4efe6b97593c" pod="kube-system/kube-apiserver-ha-150873" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150873\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 10 21:53:40 ha-150873 kubelet[1392]: I0410 21:53:40.463862    1392 scope.go:117] "RemoveContainer" containerID="30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d"
	Apr 10 21:53:42 ha-150873 kubelet[1392]: I0410 21:53:42.440441    1392 scope.go:117] "RemoveContainer" containerID="4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757"
	Apr 10 21:54:11 ha-150873 kubelet[1392]: I0410 21:54:11.630090    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-7fdf7869d9-npbvn" podStartSLOduration=416.31075719 podStartE2EDuration="6m58.62995677s" podCreationTimestamp="2024-04-10 21:47:13 +0000 UTC" firstStartedPulling="2024-04-10 21:47:14.204073658 +0000 UTC m=+224.015178302" lastFinishedPulling="2024-04-10 21:47:16.523273242 +0000 UTC m=+226.334377882" observedRunningTime="2024-04-10 21:47:17.501050937 +0000 UTC m=+227.312155596" watchObservedRunningTime="2024-04-10 21:54:11.62995677 +0000 UTC m=+641.441061429"
	Apr 10 21:54:30 ha-150873 kubelet[1392]: I0410 21:54:30.440873    1392 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-150873" podUID="ec4b952a-61d5-469d-a526-74228e791782"
	Apr 10 21:54:30 ha-150873 kubelet[1392]: I0410 21:54:30.465317    1392 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-150873"
	Apr 10 21:54:30 ha-150873 kubelet[1392]: E0410 21:54:30.517725    1392 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 21:54:30 ha-150873 kubelet[1392]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 21:54:30 ha-150873 kubelet[1392]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 21:54:30 ha-150873 kubelet[1392]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 21:54:30 ha-150873 kubelet[1392]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 21:54:40 ha-150873 kubelet[1392]: I0410 21:54:40.462166    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-vip-ha-150873" podStartSLOduration=10.462051919 podStartE2EDuration="10.462051919s" podCreationTimestamp="2024-04-10 21:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-10 21:54:40.461606086 +0000 UTC m=+670.272710745" watchObservedRunningTime="2024-04-10 21:54:40.462051919 +0000 UTC m=+670.273156577"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 21:55:15.770486   30100 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18610-5679/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-150873 -n ha-150873
helpers_test.go:261: (dbg) Run:  kubectl --context ha-150873 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (364.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 stop -v=7 --alsologtostderr
E0410 21:56:54.112459   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:56:59.610566   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150873 stop -v=7 --alsologtostderr: exit status 82 (2m0.489924569s)

                                                
                                                
-- stdout --
	* Stopping node "ha-150873-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 21:55:36.102598   30512 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:55:36.102762   30512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:55:36.102775   30512 out.go:304] Setting ErrFile to fd 2...
	I0410 21:55:36.102781   30512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:55:36.103060   30512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:55:36.103308   30512 out.go:298] Setting JSON to false
	I0410 21:55:36.103384   30512 mustload.go:65] Loading cluster: ha-150873
	I0410 21:55:36.103727   30512 config.go:182] Loaded profile config "ha-150873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:55:36.103840   30512 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/config.json ...
	I0410 21:55:36.104054   30512 mustload.go:65] Loading cluster: ha-150873
	I0410 21:55:36.104185   30512 config.go:182] Loaded profile config "ha-150873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:55:36.104214   30512 stop.go:39] StopHost: ha-150873-m04
	I0410 21:55:36.104728   30512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:55:36.104788   30512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:55:36.119385   30512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0410 21:55:36.119815   30512 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:55:36.120428   30512 main.go:141] libmachine: Using API Version  1
	I0410 21:55:36.120450   30512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:55:36.120851   30512 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:55:36.123362   30512 out.go:177] * Stopping node "ha-150873-m04"  ...
	I0410 21:55:36.124586   30512 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0410 21:55:36.124618   30512 main.go:141] libmachine: (ha-150873-m04) Calling .DriverName
	I0410 21:55:36.124822   30512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0410 21:55:36.124850   30512 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHHostname
	I0410 21:55:36.127815   30512 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:55:36.128296   30512 main.go:141] libmachine: (ha-150873-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:5f:bd", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:55:03 +0000 UTC Type:0 Mac:52:54:00:56:5f:bd Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-150873-m04 Clientid:01:52:54:00:56:5f:bd}
	I0410 21:55:36.128326   30512 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined IP address 192.168.39.144 and MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:55:36.128486   30512 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHPort
	I0410 21:55:36.128649   30512 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHKeyPath
	I0410 21:55:36.128808   30512 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHUsername
	I0410 21:55:36.128933   30512 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873-m04/id_rsa Username:docker}
	I0410 21:55:36.207970   30512 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0410 21:55:36.261730   30512 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0410 21:55:36.315812   30512 main.go:141] libmachine: Stopping "ha-150873-m04"...
	I0410 21:55:36.315845   30512 main.go:141] libmachine: (ha-150873-m04) Calling .GetState
	I0410 21:55:36.317343   30512 main.go:141] libmachine: (ha-150873-m04) Calling .Stop
	I0410 21:55:36.321650   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 0/120
	I0410 21:55:37.323080   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 1/120
	I0410 21:55:38.324479   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 2/120
	I0410 21:55:39.326510   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 3/120
	I0410 21:55:40.328260   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 4/120
	I0410 21:55:41.330154   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 5/120
	I0410 21:55:42.331572   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 6/120
	I0410 21:55:43.333005   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 7/120
	I0410 21:55:44.334730   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 8/120
	I0410 21:55:45.336695   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 9/120
	I0410 21:55:46.338676   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 10/120
	I0410 21:55:47.340261   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 11/120
	I0410 21:55:48.342169   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 12/120
	I0410 21:55:49.343546   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 13/120
	I0410 21:55:50.345910   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 14/120
	I0410 21:55:51.348005   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 15/120
	I0410 21:55:52.349748   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 16/120
	I0410 21:55:53.351490   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 17/120
	I0410 21:55:54.353299   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 18/120
	I0410 21:55:55.354988   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 19/120
	I0410 21:55:56.357347   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 20/120
	I0410 21:55:57.359212   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 21/120
	I0410 21:55:58.360743   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 22/120
	I0410 21:55:59.363151   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 23/120
	I0410 21:56:00.364735   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 24/120
	I0410 21:56:01.366808   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 25/120
	I0410 21:56:02.368283   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 26/120
	I0410 21:56:03.369860   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 27/120
	I0410 21:56:04.371520   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 28/120
	I0410 21:56:05.373059   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 29/120
	I0410 21:56:06.375227   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 30/120
	I0410 21:56:07.376982   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 31/120
	I0410 21:56:08.379191   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 32/120
	I0410 21:56:09.380582   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 33/120
	I0410 21:56:10.381888   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 34/120
	I0410 21:56:11.383994   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 35/120
	I0410 21:56:12.385973   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 36/120
	I0410 21:56:13.387370   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 37/120
	I0410 21:56:14.388729   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 38/120
	I0410 21:56:15.390112   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 39/120
	I0410 21:56:16.392485   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 40/120
	I0410 21:56:17.394112   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 41/120
	I0410 21:56:18.395966   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 42/120
	I0410 21:56:19.397771   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 43/120
	I0410 21:56:20.399414   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 44/120
	I0410 21:56:21.401826   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 45/120
	I0410 21:56:22.403511   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 46/120
	I0410 21:56:23.404899   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 47/120
	I0410 21:56:24.406904   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 48/120
	I0410 21:56:25.408106   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 49/120
	I0410 21:56:26.410406   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 50/120
	I0410 21:56:27.412595   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 51/120
	I0410 21:56:28.415141   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 52/120
	I0410 21:56:29.416581   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 53/120
	I0410 21:56:30.418924   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 54/120
	I0410 21:56:31.420973   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 55/120
	I0410 21:56:32.422833   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 56/120
	I0410 21:56:33.424179   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 57/120
	I0410 21:56:34.425556   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 58/120
	I0410 21:56:35.426842   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 59/120
	I0410 21:56:36.429040   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 60/120
	I0410 21:56:37.430895   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 61/120
	I0410 21:56:38.433125   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 62/120
	I0410 21:56:39.434286   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 63/120
	I0410 21:56:40.436282   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 64/120
	I0410 21:56:41.438495   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 65/120
	I0410 21:56:42.440185   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 66/120
	I0410 21:56:43.442222   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 67/120
	I0410 21:56:44.443733   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 68/120
	I0410 21:56:45.445129   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 69/120
	I0410 21:56:46.447442   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 70/120
	I0410 21:56:47.448905   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 71/120
	I0410 21:56:48.450358   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 72/120
	I0410 21:56:49.451991   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 73/120
	I0410 21:56:50.454113   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 74/120
	I0410 21:56:51.455347   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 75/120
	I0410 21:56:52.456985   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 76/120
	I0410 21:56:53.458557   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 77/120
	I0410 21:56:54.460136   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 78/120
	I0410 21:56:55.461771   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 79/120
	I0410 21:56:56.464073   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 80/120
	I0410 21:56:57.465668   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 81/120
	I0410 21:56:58.467036   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 82/120
	I0410 21:56:59.469091   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 83/120
	I0410 21:57:00.471022   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 84/120
	I0410 21:57:01.473027   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 85/120
	I0410 21:57:02.474568   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 86/120
	I0410 21:57:03.476038   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 87/120
	I0410 21:57:04.477890   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 88/120
	I0410 21:57:05.479364   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 89/120
	I0410 21:57:06.481590   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 90/120
	I0410 21:57:07.482919   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 91/120
	I0410 21:57:08.484333   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 92/120
	I0410 21:57:09.485649   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 93/120
	I0410 21:57:10.487205   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 94/120
	I0410 21:57:11.489291   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 95/120
	I0410 21:57:12.490958   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 96/120
	I0410 21:57:13.492261   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 97/120
	I0410 21:57:14.494328   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 98/120
	I0410 21:57:15.495809   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 99/120
	I0410 21:57:16.497997   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 100/120
	I0410 21:57:17.499484   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 101/120
	I0410 21:57:18.500926   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 102/120
	I0410 21:57:19.502997   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 103/120
	I0410 21:57:20.504576   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 104/120
	I0410 21:57:21.506389   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 105/120
	I0410 21:57:22.507653   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 106/120
	I0410 21:57:23.509219   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 107/120
	I0410 21:57:24.510906   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 108/120
	I0410 21:57:25.513084   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 109/120
	I0410 21:57:26.515561   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 110/120
	I0410 21:57:27.517184   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 111/120
	I0410 21:57:28.518712   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 112/120
	I0410 21:57:29.519975   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 113/120
	I0410 21:57:30.521445   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 114/120
	I0410 21:57:31.523655   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 115/120
	I0410 21:57:32.525204   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 116/120
	I0410 21:57:33.527160   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 117/120
	I0410 21:57:34.528458   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 118/120
	I0410 21:57:35.529624   30512 main.go:141] libmachine: (ha-150873-m04) Waiting for machine to stop 119/120
	I0410 21:57:36.530456   30512 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0410 21:57:36.530521   30512 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0410 21:57:36.532321   30512 out.go:177] 
	W0410 21:57:36.533878   30512 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0410 21:57:36.533900   30512 out.go:239] * 
	* 
	W0410 21:57:36.536033   30512 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 21:57:36.537345   30512 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-150873 stop -v=7 --alsologtostderr": exit status 82
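The stderr above shows the stop path polling the VM state once per second for 120 attempts ("Waiting for machine to stop N/120") and then giving up with GUEST_STOP_TIMEOUT (exit status 82) because the guest still reports "Running". Below is a minimal sketch of that poll-until-stopped pattern; vmDriver, fakeVM and stopWithTimeout are hypothetical stand-ins, not minikube's or libmachine's real types.

package main

import (
	"errors"
	"fmt"
	"time"
)

type vmDriver interface {
	Stop() error            // request a guest shutdown
	State() (string, error) // e.g. "Running" or "Stopped"
}

// stopWithTimeout mirrors the log: one state check per second for a fixed
// number of attempts, then an "unable to stop vm" error if still running.
func stopWithTimeout(d vmDriver, attempts int) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		st, err := d.State()
		if err != nil {
			return err
		}
		if st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// fakeVM never stops, reproducing the timeout seen in the test.
type fakeVM struct{}

func (fakeVM) Stop() error            { return nil }
func (fakeVM) State() (string, error) { return "Running", nil }

func main() {
	// 3 attempts here instead of 120, just to keep the demo short.
	if err := stopWithTimeout(fakeVM{}, 3); err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}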
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr: exit status 3 (18.99458151s)

                                                
                                                
-- stdout --
	ha-150873
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150873-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150873-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 21:57:36.596305   30942 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:57:36.596612   30942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:57:36.596626   30942 out.go:304] Setting ErrFile to fd 2...
	I0410 21:57:36.596633   30942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:57:36.596890   30942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:57:36.597073   30942 out.go:298] Setting JSON to false
	I0410 21:57:36.597102   30942 mustload.go:65] Loading cluster: ha-150873
	I0410 21:57:36.597230   30942 notify.go:220] Checking for updates...
	I0410 21:57:36.597580   30942 config.go:182] Loaded profile config "ha-150873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:57:36.597597   30942 status.go:255] checking status of ha-150873 ...
	I0410 21:57:36.597958   30942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:57:36.598018   30942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:57:36.617974   30942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I0410 21:57:36.618446   30942 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:57:36.619212   30942 main.go:141] libmachine: Using API Version  1
	I0410 21:57:36.619240   30942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:57:36.619741   30942 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:57:36.620020   30942 main.go:141] libmachine: (ha-150873) Calling .GetState
	I0410 21:57:36.621655   30942 status.go:330] ha-150873 host status = "Running" (err=<nil>)
	I0410 21:57:36.621673   30942 host.go:66] Checking if "ha-150873" exists ...
	I0410 21:57:36.621998   30942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:57:36.622038   30942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:57:36.637666   30942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0410 21:57:36.638091   30942 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:57:36.638502   30942 main.go:141] libmachine: Using API Version  1
	I0410 21:57:36.638523   30942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:57:36.638855   30942 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:57:36.639036   30942 main.go:141] libmachine: (ha-150873) Calling .GetIP
	I0410 21:57:36.642106   30942 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:57:36.642506   30942 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:57:36.642543   30942 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:57:36.642731   30942 host.go:66] Checking if "ha-150873" exists ...
	I0410 21:57:36.643013   30942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:57:36.643053   30942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:57:36.657570   30942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0410 21:57:36.658010   30942 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:57:36.658437   30942 main.go:141] libmachine: Using API Version  1
	I0410 21:57:36.658457   30942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:57:36.658766   30942 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:57:36.658939   30942 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:57:36.659114   30942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0410 21:57:36.659135   30942 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:57:36.662010   30942 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:57:36.662445   30942 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:57:36.662466   30942 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:57:36.662675   30942 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:57:36.662866   30942 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:57:36.662999   30942 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:57:36.663142   30942 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	I0410 21:57:36.749151   30942 ssh_runner.go:195] Run: systemctl --version
	I0410 21:57:36.759598   30942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 21:57:36.779687   30942 kubeconfig.go:125] found "ha-150873" server: "https://192.168.39.254:8443"
	I0410 21:57:36.779729   30942 api_server.go:166] Checking apiserver status ...
	I0410 21:57:36.779770   30942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 21:57:36.798664   30942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4269/cgroup
	W0410 21:57:36.812253   30942 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4269/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0410 21:57:36.812313   30942 ssh_runner.go:195] Run: ls
	I0410 21:57:36.817629   30942 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0410 21:57:36.822257   30942 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0410 21:57:36.822281   30942 status.go:422] ha-150873 apiserver status = Running (err=<nil>)
	I0410 21:57:36.822304   30942 status.go:257] ha-150873 status: &{Name:ha-150873 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0410 21:57:36.822336   30942 status.go:255] checking status of ha-150873-m02 ...
	I0410 21:57:36.822730   30942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:57:36.822774   30942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:57:36.837239   30942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I0410 21:57:36.837655   30942 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:57:36.838107   30942 main.go:141] libmachine: Using API Version  1
	I0410 21:57:36.838126   30942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:57:36.838459   30942 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:57:36.838634   30942 main.go:141] libmachine: (ha-150873-m02) Calling .GetState
	I0410 21:57:36.840166   30942 status.go:330] ha-150873-m02 host status = "Running" (err=<nil>)
	I0410 21:57:36.840179   30942 host.go:66] Checking if "ha-150873-m02" exists ...
	I0410 21:57:36.840559   30942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:57:36.840596   30942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:57:36.855553   30942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0410 21:57:36.855988   30942 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:57:36.856486   30942 main.go:141] libmachine: Using API Version  1
	I0410 21:57:36.856508   30942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:57:36.856779   30942 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:57:36.856946   30942 main.go:141] libmachine: (ha-150873-m02) Calling .GetIP
	I0410 21:57:36.859993   30942 main.go:141] libmachine: (ha-150873-m02) DBG | domain ha-150873-m02 has defined MAC address 52:54:00:d7:2a:e0 in network mk-ha-150873
	I0410 21:57:36.860509   30942 main.go:141] libmachine: (ha-150873-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:2a:e0", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:48:39 +0000 UTC Type:0 Mac:52:54:00:d7:2a:e0 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:ha-150873-m02 Clientid:01:52:54:00:d7:2a:e0}
	I0410 21:57:36.860534   30942 main.go:141] libmachine: (ha-150873-m02) DBG | domain ha-150873-m02 has defined IP address 192.168.39.213 and MAC address 52:54:00:d7:2a:e0 in network mk-ha-150873
	I0410 21:57:36.860688   30942 host.go:66] Checking if "ha-150873-m02" exists ...
	I0410 21:57:36.860989   30942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:57:36.861027   30942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:57:36.876757   30942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36681
	I0410 21:57:36.877158   30942 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:57:36.877547   30942 main.go:141] libmachine: Using API Version  1
	I0410 21:57:36.877565   30942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:57:36.877865   30942 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:57:36.878048   30942 main.go:141] libmachine: (ha-150873-m02) Calling .DriverName
	I0410 21:57:36.878234   30942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0410 21:57:36.878253   30942 main.go:141] libmachine: (ha-150873-m02) Calling .GetSSHHostname
	I0410 21:57:36.881046   30942 main.go:141] libmachine: (ha-150873-m02) DBG | domain ha-150873-m02 has defined MAC address 52:54:00:d7:2a:e0 in network mk-ha-150873
	I0410 21:57:36.881493   30942 main.go:141] libmachine: (ha-150873-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:2a:e0", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:48:39 +0000 UTC Type:0 Mac:52:54:00:d7:2a:e0 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:ha-150873-m02 Clientid:01:52:54:00:d7:2a:e0}
	I0410 21:57:36.881527   30942 main.go:141] libmachine: (ha-150873-m02) DBG | domain ha-150873-m02 has defined IP address 192.168.39.213 and MAC address 52:54:00:d7:2a:e0 in network mk-ha-150873
	I0410 21:57:36.881706   30942 main.go:141] libmachine: (ha-150873-m02) Calling .GetSSHPort
	I0410 21:57:36.881867   30942 main.go:141] libmachine: (ha-150873-m02) Calling .GetSSHKeyPath
	I0410 21:57:36.882006   30942 main.go:141] libmachine: (ha-150873-m02) Calling .GetSSHUsername
	I0410 21:57:36.882134   30942 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873-m02/id_rsa Username:docker}
	I0410 21:57:36.972290   30942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 21:57:36.992102   30942 kubeconfig.go:125] found "ha-150873" server: "https://192.168.39.254:8443"
	I0410 21:57:36.992142   30942 api_server.go:166] Checking apiserver status ...
	I0410 21:57:36.992194   30942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 21:57:37.006634   30942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/3683/cgroup
	W0410 21:57:37.016923   30942 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/3683/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0410 21:57:37.016981   30942 ssh_runner.go:195] Run: ls
	I0410 21:57:37.021554   30942 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0410 21:57:37.026586   30942 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0410 21:57:37.026610   30942 status.go:422] ha-150873-m02 apiserver status = Running (err=<nil>)
	I0410 21:57:37.026618   30942 status.go:257] ha-150873-m02 status: &{Name:ha-150873-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0410 21:57:37.026638   30942 status.go:255] checking status of ha-150873-m04 ...
	I0410 21:57:37.026940   30942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:57:37.026978   30942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:57:37.041560   30942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
	I0410 21:57:37.041995   30942 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:57:37.042462   30942 main.go:141] libmachine: Using API Version  1
	I0410 21:57:37.042481   30942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:57:37.042778   30942 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:57:37.042968   30942 main.go:141] libmachine: (ha-150873-m04) Calling .GetState
	I0410 21:57:37.044484   30942 status.go:330] ha-150873-m04 host status = "Running" (err=<nil>)
	I0410 21:57:37.044501   30942 host.go:66] Checking if "ha-150873-m04" exists ...
	I0410 21:57:37.044784   30942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:57:37.044825   30942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:57:37.059102   30942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0410 21:57:37.059577   30942 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:57:37.060155   30942 main.go:141] libmachine: Using API Version  1
	I0410 21:57:37.060177   30942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:57:37.060526   30942 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:57:37.060748   30942 main.go:141] libmachine: (ha-150873-m04) Calling .GetIP
	I0410 21:57:37.063484   30942 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:57:37.063880   30942 main.go:141] libmachine: (ha-150873-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:5f:bd", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:55:03 +0000 UTC Type:0 Mac:52:54:00:56:5f:bd Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-150873-m04 Clientid:01:52:54:00:56:5f:bd}
	I0410 21:57:37.063911   30942 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined IP address 192.168.39.144 and MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:57:37.064018   30942 host.go:66] Checking if "ha-150873-m04" exists ...
	I0410 21:57:37.064376   30942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:57:37.064431   30942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:57:37.079174   30942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I0410 21:57:37.079584   30942 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:57:37.080032   30942 main.go:141] libmachine: Using API Version  1
	I0410 21:57:37.080054   30942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:57:37.080339   30942 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:57:37.080558   30942 main.go:141] libmachine: (ha-150873-m04) Calling .DriverName
	I0410 21:57:37.080745   30942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0410 21:57:37.080765   30942 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHHostname
	I0410 21:57:37.083754   30942 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:57:37.084202   30942 main.go:141] libmachine: (ha-150873-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:5f:bd", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:55:03 +0000 UTC Type:0 Mac:52:54:00:56:5f:bd Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-150873-m04 Clientid:01:52:54:00:56:5f:bd}
	I0410 21:57:37.084232   30942 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined IP address 192.168.39.144 and MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:57:37.084415   30942 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHPort
	I0410 21:57:37.084560   30942 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHKeyPath
	I0410 21:57:37.084718   30942 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHUsername
	I0410 21:57:37.084824   30942 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873-m04/id_rsa Username:docker}
	W0410 21:57:55.532591   30942 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.144:22: connect: no route to host
	W0410 21:57:55.532669   30942 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host
	E0410 21:57:55.532687   30942 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host
	I0410 21:57:55.532700   30942 status.go:257] ha-150873-m04 status: &{Name:ha-150873-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0410 21:57:55.532723   30942 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.144:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr" : exit status 3
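The status probe in the stderr above walks each node in turn: open an SSH session, check disk usage of /var, check whether the kubelet service is active, and, for control-plane nodes, query the apiserver's /healthz endpoint. For ha-150873-m04 the SSH dial fails with "no route to host", so that node is reported as Host:Error / Kubelet:Nonexistent. Below is a minimal sketch of just the /healthz step; the endpoint is taken from the log, and skipping TLS verification is a simplification here (a real client would trust the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy performs the GET /healthz check visible in the log and
// treats HTTP 200 with body "ok" as a healthy apiserver.
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Demo-only shortcut; not how a production client should verify TLS.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Error:", err)
		return
	}
	if ok {
		fmt.Println("apiserver status = Running")
	}
}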
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-150873 -n ha-150873
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-150873 logs -n 25: (1.864820188s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-150873 ssh -n ha-150873-m02 sudo cat                                         | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-150873-m03_ha-150873-m02.txt                            |           |         |                |                     |                     |
	| cp      | ha-150873 cp ha-150873-m03:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04:/home/docker/cp-test_ha-150873-m03_ha-150873-m04.txt              |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m03 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n ha-150873-m04 sudo cat                                         | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-150873-m03_ha-150873-m04.txt                            |           |         |                |                     |                     |
	| cp      | ha-150873 cp testdata/cp-test.txt                                               | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04:/home/docker/cp-test.txt                                          |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| cp      | ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile947152864/001/cp-test_ha-150873-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| cp      | ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873:/home/docker/cp-test_ha-150873-m04_ha-150873.txt                      |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n ha-150873 sudo cat                                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-150873-m04_ha-150873.txt                                |           |         |                |                     |                     |
	| cp      | ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m02:/home/docker/cp-test_ha-150873-m04_ha-150873-m02.txt              |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n ha-150873-m02 sudo cat                                         | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-150873-m04_ha-150873-m02.txt                            |           |         |                |                     |                     |
	| cp      | ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m03:/home/docker/cp-test_ha-150873-m04_ha-150873-m03.txt              |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n                                                                | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | ha-150873-m04 sudo cat                                                          |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |                |                     |                     |
	| ssh     | ha-150873 ssh -n ha-150873-m03 sudo cat                                         | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | /home/docker/cp-test_ha-150873-m04_ha-150873-m03.txt                            |           |         |                |                     |                     |
	| node    | ha-150873 node stop m02 -v=7                                                    | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:48 UTC |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| node    | ha-150873 node start m02 -v=7                                                   | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:48 UTC | 10 Apr 24 21:49 UTC |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| node    | list -p ha-150873 -v=7                                                          | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| stop    | -p ha-150873 -v=7                                                               | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| start   | -p ha-150873 --wait=true -v=7                                                   | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:51 UTC | 10 Apr 24 21:55 UTC |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| node    | list -p ha-150873                                                               | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:55 UTC |                     |
	| node    | ha-150873 node delete m03 -v=7                                                  | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:55 UTC | 10 Apr 24 21:55 UTC |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	| stop    | ha-150873 stop -v=7                                                             | ha-150873 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:55 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 21:51:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 21:51:16.039678   28838 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:51:16.039937   28838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:51:16.039947   28838 out.go:304] Setting ErrFile to fd 2...
	I0410 21:51:16.039952   28838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:51:16.040142   28838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:51:16.040733   28838 out.go:298] Setting JSON to false
	I0410 21:51:16.041631   28838 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2018,"bootTime":1712783858,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 21:51:16.041692   28838 start.go:139] virtualization: kvm guest
	I0410 21:51:16.044134   28838 out.go:177] * [ha-150873] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 21:51:16.046561   28838 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 21:51:16.046616   28838 notify.go:220] Checking for updates...
	I0410 21:51:16.049355   28838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 21:51:16.050728   28838 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 21:51:16.052123   28838 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:51:16.053419   28838 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 21:51:16.054853   28838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 21:51:16.056599   28838 config.go:182] Loaded profile config "ha-150873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:51:16.056683   28838 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 21:51:16.057148   28838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:51:16.057191   28838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:51:16.074305   28838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36401
	I0410 21:51:16.074713   28838 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:51:16.075369   28838 main.go:141] libmachine: Using API Version  1
	I0410 21:51:16.075392   28838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:51:16.075799   28838 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:51:16.076059   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:51:16.113343   28838 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 21:51:16.114922   28838 start.go:297] selected driver: kvm2
	I0410 21:51:16.114945   28838 start.go:901] validating driver "kvm2" against &{Name:ha-150873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-150873 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.143 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.144 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:51:16.115142   28838 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 21:51:16.115495   28838 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:51:16.115572   28838 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 21:51:16.130178   28838 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 21:51:16.130843   28838 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 21:51:16.130924   28838 cni.go:84] Creating CNI manager for ""
	I0410 21:51:16.130939   28838 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0410 21:51:16.131002   28838 start.go:340] cluster config:
	{Name:ha-150873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-150873 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.143 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.144 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:51:16.131148   28838 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:51:16.133237   28838 out.go:177] * Starting "ha-150873" primary control-plane node in "ha-150873" cluster
	I0410 21:51:16.134680   28838 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 21:51:16.134723   28838 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 21:51:16.134730   28838 cache.go:56] Caching tarball of preloaded images
	I0410 21:51:16.134867   28838 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 21:51:16.134884   28838 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 21:51:16.135003   28838 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/config.json ...
	I0410 21:51:16.135241   28838 start.go:360] acquireMachinesLock for ha-150873: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 21:51:16.135287   28838 start.go:364] duration metric: took 29.911µs to acquireMachinesLock for "ha-150873"
	I0410 21:51:16.135306   28838 start.go:96] Skipping create...Using existing machine configuration
	I0410 21:51:16.135317   28838 fix.go:54] fixHost starting: 
	I0410 21:51:16.135589   28838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:51:16.135618   28838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:51:16.150140   28838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33911
	I0410 21:51:16.150628   28838 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:51:16.151233   28838 main.go:141] libmachine: Using API Version  1
	I0410 21:51:16.151259   28838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:51:16.151584   28838 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:51:16.151798   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:51:16.152014   28838 main.go:141] libmachine: (ha-150873) Calling .GetState
	I0410 21:51:16.153793   28838 fix.go:112] recreateIfNeeded on ha-150873: state=Running err=<nil>
	W0410 21:51:16.153814   28838 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 21:51:16.156157   28838 out.go:177] * Updating the running kvm2 "ha-150873" VM ...
	I0410 21:51:16.157559   28838 machine.go:94] provisionDockerMachine start ...
	I0410 21:51:16.157579   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:51:16.157778   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.160501   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.161299   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.161334   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.161452   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:51:16.161646   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.161843   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.162039   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:51:16.162238   28838 main.go:141] libmachine: Using SSH client type: native
	I0410 21:51:16.162464   28838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0410 21:51:16.162486   28838 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 21:51:16.270248   28838 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150873
	
	I0410 21:51:16.270284   28838 main.go:141] libmachine: (ha-150873) Calling .GetMachineName
	I0410 21:51:16.270567   28838 buildroot.go:166] provisioning hostname "ha-150873"
	I0410 21:51:16.270591   28838 main.go:141] libmachine: (ha-150873) Calling .GetMachineName
	I0410 21:51:16.270792   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.273376   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.273728   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.273762   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.273912   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:51:16.274095   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.274247   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.274386   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:51:16.274543   28838 main.go:141] libmachine: Using SSH client type: native
	I0410 21:51:16.274738   28838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0410 21:51:16.274751   28838 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-150873 && echo "ha-150873" | sudo tee /etc/hostname
	I0410 21:51:16.402765   28838 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150873
	
	I0410 21:51:16.402807   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.405799   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.406244   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.406278   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.406535   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:51:16.406717   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.406876   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.406989   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:51:16.407112   28838 main.go:141] libmachine: Using SSH client type: native
	I0410 21:51:16.407263   28838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0410 21:51:16.407289   28838 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-150873' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-150873/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-150873' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 21:51:16.513016   28838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 21:51:16.513041   28838 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 21:51:16.513078   28838 buildroot.go:174] setting up certificates
	I0410 21:51:16.513087   28838 provision.go:84] configureAuth start
	I0410 21:51:16.513098   28838 main.go:141] libmachine: (ha-150873) Calling .GetMachineName
	I0410 21:51:16.513349   28838 main.go:141] libmachine: (ha-150873) Calling .GetIP
	I0410 21:51:16.516046   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.516537   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.516567   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.516718   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.519025   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.519424   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.519528   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.519620   28838 provision.go:143] copyHostCerts
	I0410 21:51:16.519644   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 21:51:16.519687   28838 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 21:51:16.519695   28838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 21:51:16.519769   28838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 21:51:16.519836   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 21:51:16.519861   28838 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 21:51:16.519871   28838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 21:51:16.519909   28838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 21:51:16.519972   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 21:51:16.519989   28838 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 21:51:16.519996   28838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 21:51:16.520038   28838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 21:51:16.520101   28838 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.ha-150873 san=[127.0.0.1 192.168.39.12 ha-150873 localhost minikube]
	I0410 21:51:16.765692   28838 provision.go:177] copyRemoteCerts
	I0410 21:51:16.765756   28838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 21:51:16.765784   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.768437   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.768835   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.768865   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.769073   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:51:16.769275   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.769451   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:51:16.769574   28838 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	I0410 21:51:16.854435   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0410 21:51:16.854510   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 21:51:16.888051   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0410 21:51:16.888121   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0410 21:51:16.915829   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0410 21:51:16.915902   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 21:51:16.950373   28838 provision.go:87] duration metric: took 437.27063ms to configureAuth
	I0410 21:51:16.950403   28838 buildroot.go:189] setting minikube options for container-runtime
	I0410 21:51:16.950680   28838 config.go:182] Loaded profile config "ha-150873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:51:16.950761   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:51:16.953644   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.954038   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:51:16.954068   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:51:16.954363   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:51:16.954654   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.954828   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:51:16.954991   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:51:16.955198   28838 main.go:141] libmachine: Using SSH client type: native
	I0410 21:51:16.955430   28838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0410 21:51:16.955459   28838 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 21:52:47.944047   28838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 21:52:47.944110   28838 machine.go:97] duration metric: took 1m31.786528876s to provisionDockerMachine
	I0410 21:52:47.944155   28838 start.go:293] postStartSetup for "ha-150873" (driver="kvm2")
	I0410 21:52:47.944176   28838 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 21:52:47.944205   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:47.944587   28838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 21:52:47.944615   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:52:47.948065   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:47.948579   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:47.948607   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:47.948766   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:52:47.948975   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:47.949122   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:52:47.949251   28838 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	I0410 21:52:48.032250   28838 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 21:52:48.036615   28838 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 21:52:48.036642   28838 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 21:52:48.036714   28838 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 21:52:48.036831   28838 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 21:52:48.036849   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> /etc/ssl/certs/130012.pem
	I0410 21:52:48.036974   28838 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 21:52:48.047416   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 21:52:48.074242   28838 start.go:296] duration metric: took 130.066467ms for postStartSetup
	I0410 21:52:48.074281   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:48.074564   28838 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0410 21:52:48.074598   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:52:48.077254   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.077733   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:48.077763   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.077940   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:52:48.078152   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:48.078324   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:52:48.078515   28838 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	W0410 21:52:48.159939   28838 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0410 21:52:48.159963   28838 fix.go:56] duration metric: took 1m32.024652278s for fixHost
	I0410 21:52:48.159983   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:52:48.162936   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.163362   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:48.163389   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.163481   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:52:48.163696   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:48.163906   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:48.164076   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:52:48.164234   28838 main.go:141] libmachine: Using SSH client type: native
	I0410 21:52:48.164458   28838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0410 21:52:48.164471   28838 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 21:52:48.265486   28838 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712785968.249510720
	
	I0410 21:52:48.265522   28838 fix.go:216] guest clock: 1712785968.249510720
	I0410 21:52:48.265528   28838 fix.go:229] Guest: 2024-04-10 21:52:48.24951072 +0000 UTC Remote: 2024-04-10 21:52:48.159970823 +0000 UTC m=+92.167300342 (delta=89.539897ms)
	I0410 21:52:48.265546   28838 fix.go:200] guest clock delta is within tolerance: 89.539897ms
	I0410 21:52:48.265552   28838 start.go:83] releasing machines lock for "ha-150873", held for 1m32.130254676s
	I0410 21:52:48.265579   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:48.265826   28838 main.go:141] libmachine: (ha-150873) Calling .GetIP
	I0410 21:52:48.268824   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.269208   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:48.269240   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.269387   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:48.269938   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:48.270169   28838 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:52:48.270304   28838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 21:52:48.270341   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:52:48.270454   28838 ssh_runner.go:195] Run: cat /version.json
	I0410 21:52:48.270469   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:52:48.273381   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.273772   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:48.273794   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.273829   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.273974   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:52:48.274153   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:48.274288   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:52:48.274295   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:48.274309   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:48.274402   28838 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	I0410 21:52:48.274504   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:52:48.274659   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:52:48.274845   28838 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:52:48.274990   28838 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	I0410 21:52:48.382606   28838 ssh_runner.go:195] Run: systemctl --version
	I0410 21:52:48.391251   28838 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 21:52:48.563103   28838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 21:52:48.578133   28838 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 21:52:48.578199   28838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 21:52:48.589055   28838 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0410 21:52:48.589077   28838 start.go:494] detecting cgroup driver to use...
	I0410 21:52:48.589134   28838 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 21:52:48.609417   28838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 21:52:48.631411   28838 docker.go:217] disabling cri-docker service (if available) ...
	I0410 21:52:48.631492   28838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 21:52:48.647700   28838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 21:52:48.663053   28838 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 21:52:48.835357   28838 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 21:52:48.996820   28838 docker.go:233] disabling docker service ...
	I0410 21:52:48.996880   28838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 21:52:49.014085   28838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 21:52:49.028496   28838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 21:52:49.183406   28838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 21:52:49.336954   28838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 21:52:49.352961   28838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 21:52:49.374425   28838 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 21:52:49.374488   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.387583   28838 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 21:52:49.387647   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.399510   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.411803   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.424001   28838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 21:52:49.437031   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.448658   28838 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.459945   28838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 21:52:49.471588   28838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 21:52:49.482315   28838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 21:52:49.493328   28838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 21:52:49.656363   28838 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 21:52:51.740583   28838 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.084176589s)
	I0410 21:52:51.740613   28838 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 21:52:51.740666   28838 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 21:52:51.747208   28838 start.go:562] Will wait 60s for crictl version
	I0410 21:52:51.747302   28838 ssh_runner.go:195] Run: which crictl
	I0410 21:52:51.751869   28838 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 21:52:51.793368   28838 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 21:52:51.793443   28838 ssh_runner.go:195] Run: crio --version
	I0410 21:52:51.826527   28838 ssh_runner.go:195] Run: crio --version
	I0410 21:52:51.861008   28838 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 21:52:51.863027   28838 main.go:141] libmachine: (ha-150873) Calling .GetIP
	I0410 21:52:51.866193   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:51.866581   28838 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:52:51.866607   28838 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:52:51.866842   28838 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 21:52:51.871871   28838 kubeadm.go:877] updating cluster {Name:ha-150873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-150873 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.143 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.144 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 21:52:51.871995   28838 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 21:52:51.872035   28838 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 21:52:51.919789   28838 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 21:52:51.919815   28838 crio.go:433] Images already preloaded, skipping extraction
	I0410 21:52:51.919868   28838 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 21:52:51.963846   28838 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 21:52:51.963868   28838 cache_images.go:84] Images are preloaded, skipping loading
	I0410 21:52:51.963876   28838 kubeadm.go:928] updating node { 192.168.39.12 8443 v1.29.3 crio true true} ...
	I0410 21:52:51.963962   28838 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-150873 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-150873 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 21:52:51.964020   28838 ssh_runner.go:195] Run: crio config
	I0410 21:52:52.022155   28838 cni.go:84] Creating CNI manager for ""
	I0410 21:52:52.022176   28838 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0410 21:52:52.022186   28838 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 21:52:52.022206   28838 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-150873 NodeName:ha-150873 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 21:52:52.022333   28838 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-150873"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 21:52:52.022360   28838 kube-vip.go:111] generating kube-vip config ...
	I0410 21:52:52.022398   28838 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0410 21:52:52.035262   28838 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0410 21:52:52.035388   28838 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0410 21:52:52.035439   28838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 21:52:52.045809   28838 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 21:52:52.045878   28838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0410 21:52:52.056106   28838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0410 21:52:52.073761   28838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 21:52:52.093533   28838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0410 21:52:52.113639   28838 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0410 21:52:52.133451   28838 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0410 21:52:52.139130   28838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 21:52:52.298795   28838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 21:52:52.317013   28838 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873 for IP: 192.168.39.12
	I0410 21:52:52.317035   28838 certs.go:194] generating shared ca certs ...
	I0410 21:52:52.317049   28838 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:52:52.317207   28838 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 21:52:52.317268   28838 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 21:52:52.317288   28838 certs.go:256] generating profile certs ...
	I0410 21:52:52.317381   28838 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/client.key
	I0410 21:52:52.317411   28838 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key.434639d3
	I0410 21:52:52.317431   28838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt.434639d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.213 192.168.39.143 192.168.39.254]
	I0410 21:52:52.708796   28838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt.434639d3 ...
	I0410 21:52:52.708829   28838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt.434639d3: {Name:mk1501dc67fd7c8d8a733778ec51a67d98f8dd6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:52:52.709020   28838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key.434639d3 ...
	I0410 21:52:52.709039   28838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key.434639d3: {Name:mk57920fc7ebe91730f5e8058b009a75614e19dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:52:52.709139   28838 certs.go:381] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt.434639d3 -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt
	I0410 21:52:52.709302   28838 certs.go:385] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key.434639d3 -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key
	I0410 21:52:52.709457   28838 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/proxy-client.key
	I0410 21:52:52.709474   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0410 21:52:52.709491   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0410 21:52:52.709507   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0410 21:52:52.709526   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0410 21:52:52.709542   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0410 21:52:52.709558   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0410 21:52:52.709574   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0410 21:52:52.709589   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0410 21:52:52.709688   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 21:52:52.709768   28838 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 21:52:52.709783   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 21:52:52.709818   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 21:52:52.709851   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 21:52:52.709883   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 21:52:52.709943   28838 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 21:52:52.710000   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> /usr/share/ca-certificates/130012.pem
	I0410 21:52:52.710029   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:52:52.710045   28838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem -> /usr/share/ca-certificates/13001.pem
	I0410 21:52:52.710547   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 21:52:52.739805   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 21:52:52.767600   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 21:52:52.794794   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 21:52:52.822195   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0410 21:52:52.849264   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 21:52:52.876553   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 21:52:52.905449   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/ha-150873/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 21:52:52.935719   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 21:52:52.965741   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 21:52:52.993202   28838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 21:52:53.019601   28838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 21:52:53.037482   28838 ssh_runner.go:195] Run: openssl version
	I0410 21:52:53.043362   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 21:52:53.054582   28838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 21:52:53.059343   28838 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 21:52:53.059401   28838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 21:52:53.065564   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 21:52:53.075485   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 21:52:53.087649   28838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:52:53.092514   28838 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:52:53.092593   28838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 21:52:53.098672   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 21:52:53.108680   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 21:52:53.120151   28838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 21:52:53.125691   28838 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 21:52:53.125746   28838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 21:52:53.131792   28838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
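The three command groups above repeat the same pattern for each CA bundle: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash with `openssl x509 -hash -noout -in`, then symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients can find the CA. Below is a minimal Go sketch of that hash-and-link step; it is illustrative only (minikube runs the equivalent shell commands remotely via ssh_runner), the helper name is hypothetical, and it links the hash entry straight to the PEM rather than through the intermediate /etc/ssl/certs/<name>.pem link seen in the log.

	// ca_link.go - illustrative sketch of the "openssl x509 -hash" + "ln -fs"
	// sequence in the log above; requires write access to /etc/ssl/certs.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of pemPath and creates the
	// /etc/ssl/certs/<hash>.0 symlink that OpenSSL uses to locate the CA.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Replace any stale link, matching the "ln -fs" semantics in the log.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}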
	I0410 21:52:53.143410   28838 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 21:52:53.148422   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 21:52:53.155040   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 21:52:53.161092   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 21:52:53.167091   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 21:52:53.173144   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 21:52:53.179262   28838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
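Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid in 86400 seconds (24 hours); a failing check is what triggers certificate regeneration. As a point of reference, a minimal Go sketch of the same check using crypto/x509 follows. It is not minikube's implementation; the path is just one of the files probed in the log.

	// cert_check.go - sketch of what "openssl x509 -checkend 86400" verifies:
	// whether the certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when "now + d" is past NotAfter, i.e. the cert expires within d.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h and would be regenerated")
		}
	}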
	I0410 21:52:53.185349   28838 kubeadm.go:391] StartCluster: {Name:ha-150873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-150873 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.143 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.144 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:52:53.185461   28838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 21:52:53.185544   28838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 21:52:53.230491   28838 cri.go:89] found id: "9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90"
	I0410 21:52:53.230515   28838 cri.go:89] found id: "1cfc2b9c051242d80ba7ad77f48c93b0c05cad4b86762bad6a1d854c33f6f32c"
	I0410 21:52:53.230523   28838 cri.go:89] found id: "c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2"
	I0410 21:52:53.230527   28838 cri.go:89] found id: "5565984567f9b26d4eed3577b07de6834f5ef76975cf4e514b712d250b43da66"
	I0410 21:52:53.230530   28838 cri.go:89] found id: "fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330"
	I0410 21:52:53.230534   28838 cri.go:89] found id: "a801aece5216f7e138337b799c1d603457c75338bc2d81915b8a2438f4c87070"
	I0410 21:52:53.230538   28838 cri.go:89] found id: "98119aea5e81af5a68cfed4eb015bf0f2b686e5a50dc607aca3240ee2f835f49"
	I0410 21:52:53.230541   28838 cri.go:89] found id: "e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196"
	I0410 21:52:53.230546   28838 cri.go:89] found id: "9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482"
	I0410 21:52:53.230553   28838 cri.go:89] found id: "538656d928f393189cf0534187fb39b2c64bb730a3116504548fdc465be1ea0a"
	I0410 21:52:53.230562   28838 cri.go:89] found id: ""
	I0410 21:52:53.230625   28838 ssh_runner.go:195] Run: sudo runc list -f json
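Before kubeadm is invoked, the "found id" lines above come from running the crictl command shown (`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`) and splitting its newline-separated container IDs. A small Go sketch of that listing step is below; it assumes a host with crictl and a CRI socket, and the function name is hypothetical rather than taken from cri.go.

	// list_kube_system.go - illustrative sketch of the container listing above:
	// run crictl with the kube-system label filter and collect the IDs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}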
	
	
	==> CRI-O <==
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.281931633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712786022459649353,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712786020508110954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5843098e8f58e76b5e87b452629022743500b92173820f27b05241c46737470a,PodSandboxId:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712786011924871993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kubernetes.container.hash: ec06d454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712786010353918322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097ab2b7a3b21e861478a5265978920652458bdb04e361253d82c88339bbf66a,PodSandboxId:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712786000463841729,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:11b8446660febd3894b8ae348d19cb08dc586be0b366fe960017799e3ef498b9,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712785996449330227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666,PodSandboxId:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712785979114651690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3f65a763bc9e27d1d1cb7df78aaa507490cf2c0ef14a25459071556e5237bd19,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712785979187621109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63235fb
69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712785978670292567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74a5b5b09ea11da219a8235e9f05f
e927caf625ef95cdbf9ddb867aa7bcddce,PodSandboxId:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978953180708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59,PodSandboxId:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978833280294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712785978602127581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a
190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712785978526775957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05
169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf,PodSandboxId:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712785978512134053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e,PodSandboxId:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712785978086673766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubern
etes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14633f0c6042a07eb877fb35742fc7f78eaf5fc02579011e3f22392bd4705149,PodSandboxId:caa71de8a90bd8f405aa1d2b15a22b877e9efabdda5d3ab5654d3c60100c6f2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712785636537427350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kuberne
tes.container.hash: ec06d454,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90,PodSandboxId:477c4e121d289241b04e5bcba6621e3c962c07d0df1a2d85195741c8508989da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425699209159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2,PodSandboxId:7dcb837166455521932d4cb9f6dc4f1c30c3bbb463ace91c9b170b13eaa35891,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425572440741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330,PodSandboxId:215d1fe94079dd52ffc980ec77268193f9b6d373850752af5c7718762a5429df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0a
cea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712785423376402271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196,PodSandboxId:503f0fd82969812f24a9d05afabc98c944f7c8c319b5dd485703c8293c6cc2de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5
a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712785403622940875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482,PodSandboxId:3fad80ebf2adf1ef57f94d98afe692626c0000f4a7a16f2cc6934600c687c563,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1712785403608929394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[string]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c45dae32-f7c5-43af-8c1b-289aa379b1bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.340838380Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a09cc974-3816-4258-af68-af10ea6d040f name=/runtime.v1.RuntimeService/Version
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.340915558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a09cc974-3816-4258-af68-af10ea6d040f name=/runtime.v1.RuntimeService/Version
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.342337380Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d42df18f-e506-4221-9a5c-0fa549abc008 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.342786479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712786276342753169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d42df18f-e506-4221-9a5c-0fa549abc008 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.343581763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1fcfd55-56b8-48a0-a970-422f6a816c56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.343651013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1fcfd55-56b8-48a0-a970-422f6a816c56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.344480006Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712786022459649353,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712786020508110954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5843098e8f58e76b5e87b452629022743500b92173820f27b05241c46737470a,PodSandboxId:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712786011924871993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kubernetes.container.hash: ec06d454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712786010353918322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097ab2b7a3b21e861478a5265978920652458bdb04e361253d82c88339bbf66a,PodSandboxId:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712786000463841729,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:11b8446660febd3894b8ae348d19cb08dc586be0b366fe960017799e3ef498b9,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712785996449330227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666,PodSandboxId:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712785979114651690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3f65a763bc9e27d1d1cb7df78aaa507490cf2c0ef14a25459071556e5237bd19,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712785979187621109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63235fb
69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712785978670292567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74a5b5b09ea11da219a8235e9f05f
e927caf625ef95cdbf9ddb867aa7bcddce,PodSandboxId:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978953180708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59,PodSandboxId:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978833280294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712785978602127581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a
190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712785978526775957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05
169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf,PodSandboxId:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712785978512134053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e,PodSandboxId:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712785978086673766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubern
etes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14633f0c6042a07eb877fb35742fc7f78eaf5fc02579011e3f22392bd4705149,PodSandboxId:caa71de8a90bd8f405aa1d2b15a22b877e9efabdda5d3ab5654d3c60100c6f2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712785636537427350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kuberne
tes.container.hash: ec06d454,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90,PodSandboxId:477c4e121d289241b04e5bcba6621e3c962c07d0df1a2d85195741c8508989da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425699209159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2,PodSandboxId:7dcb837166455521932d4cb9f6dc4f1c30c3bbb463ace91c9b170b13eaa35891,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425572440741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330,PodSandboxId:215d1fe94079dd52ffc980ec77268193f9b6d373850752af5c7718762a5429df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0a
cea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712785423376402271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196,PodSandboxId:503f0fd82969812f24a9d05afabc98c944f7c8c319b5dd485703c8293c6cc2de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5
a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712785403622940875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482,PodSandboxId:3fad80ebf2adf1ef57f94d98afe692626c0000f4a7a16f2cc6934600c687c563,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1712785403608929394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[string]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1fcfd55-56b8-48a0-a970-422f6a816c56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.377816512Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=81fae9e5-da76-4007-8504-f9b51cd167f0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.378671872Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-npbvn,Uid:9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712786011696357994,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:47:13.552326955Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-150873,Uid:6ebab890b99987fdef4351dcb63a481c,Namespace:kube-system,Attempt:0,},State:SANDBOX
_READY,CreatedAt:1712786000339718361,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{kubernetes.io/config.hash: 6ebab890b99987fdef4351dcb63a481c,kubernetes.io/config.seen: 2024-04-10T21:52:52.119091229Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-lv7pk,Uid:3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785978043509967,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024
-04-10T21:43:45.072962538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&PodSandboxMetadata{Name:kube-proxy-4k6ws,Uid:ff82bf47-319e-444b-bb54-9f44b684bf06,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977994386047,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:43:42.773708235Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-150873,Uid:e6eff29f33f6e236015d4efe6b97593c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977958519061,
Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.12:8443,kubernetes.io/config.hash: e6eff29f33f6e236015d4efe6b97593c,kubernetes.io/config.seen: 2024-04-10T21:43:30.323971762Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-v7npj,Uid:20a44fe0-14c0-451f-b707-d129c6cb30d4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977951853465,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,k8s-app: kub
e-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:43:45.059775825Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-150873,Uid:05169d4b9723d694fde443a3079da775,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977933607023,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 05169d4b9723d694fde443a3079da775,kubernetes.io/config.seen: 2024-04-10T21:43:30.323973137Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&PodSandboxMet
adata{Name:etcd-ha-150873,Uid:33ead7f643006d5b17c362a714ce1716,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977919416244,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.12:2379,kubernetes.io/config.hash: 33ead7f643006d5b17c362a714ce1716,kubernetes.io/config.seen: 2024-04-10T21:43:30.323967672Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:033c01dc-895b-4eca-87b9-e5a8444c4c62,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977904480597,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,
io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-10T21:43:45.072798234Z,kubernetes.io/config.source: api,},RuntimeH
andler:,},&PodSandbox{Id:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&PodSandboxMetadata{Name:kindnet-twk5c,Uid:ebfddcc6-a190-4756-9096-1dc2cec68cf7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977874136037,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:43:42.804960393Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-150873,Uid:bb34a5c53d32d72142b8d7c7bfda2302,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977792890536,Labels:map[string]string{component: kube-scheduler,io.ku
bernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bb34a5c53d32d72142b8d7c7bfda2302,kubernetes.io/config.seen: 2024-04-10T21:43:30.323974267Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:caa71de8a90bd8f405aa1d2b15a22b877e9efabdda5d3ab5654d3c60100c6f2c,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-npbvn,Uid:9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1712785633870094586,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:47:13.552326955Z,kubernetes.io/config.sou
rce: api,},RuntimeHandler:,},&PodSandbox{Id:477c4e121d289241b04e5bcba6621e3c962c07d0df1a2d85195741c8508989da,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-lv7pk,Uid:3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1712785425384292651,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:43:45.072962538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7dcb837166455521932d4cb9f6dc4f1c30c3bbb463ace91c9b170b13eaa35891,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-v7npj,Uid:20a44fe0-14c0-451f-b707-d129c6cb30d4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1712785425367305148,Labels:map[string]string{io.kubernetes.container.name: POD,io
.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:43:45.059775825Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:215d1fe94079dd52ffc980ec77268193f9b6d373850752af5c7718762a5429df,Metadata:&PodSandboxMetadata{Name:kube-proxy-4k6ws,Uid:ff82bf47-319e-444b-bb54-9f44b684bf06,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1712785423093338102,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:43:42.773708235Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:503f0fd82969812f24a9d05afabc98c944f7c8c319b5dd485703c8293c6cc2de,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-150873,Uid:bb34a5c53d32d72142b8d7c7bfda2302,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1712785403132238934,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bb34a5c53d32d72142b8d7c7bfda2302,kubernetes.io/config.seen: 2024-04-10T21:43:22.609030816Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3fad80ebf2adf1ef57f94d98afe692626c0000f4a7a16f2cc6934600c687c563,Metadata:&PodSandboxMetadata{Name:etcd-ha-150873,Uid:33ead7f643006d5b17c362a714ce1716,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1712785403113248461,Labels:map[string]string{component: etcd,io.kuberne
tes.container.name: POD,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.12:2379,kubernetes.io/config.hash: 33ead7f643006d5b17c362a714ce1716,kubernetes.io/config.seen: 2024-04-10T21:43:22.609055753Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=81fae9e5-da76-4007-8504-f9b51cd167f0 name=/runtime.v1.RuntimeService/ListPodSandbox
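	For context, the ListPodSandbox dump above is CRI-O answering the kubelet's routine CRI poll: the SANDBOX_READY entries with Attempt:1 appear to have been re-created after the restart this test exercises (config.seen around 21:52), while the SANDBOX_NOTREADY Attempt:0 entries date from the original start around 21:43. A minimal sketch of how the same sandbox listing could be reproduced on this node, assuming the crictl binary that ships in the minikube guest and the ha-150873 profile used in this run:
	
	# list pod sandboxes straight from CRI-O inside the guest (ready and not-ready)
	out/minikube-linux-amd64 -p ha-150873 ssh "sudo crictl pods"
	# dump the full metadata/annotations for one sandbox (substitute a real sandbox ID)
	out/minikube-linux-amd64 -p ha-150873 ssh "sudo crictl inspectp <sandbox-id>"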
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.379643207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=900d2596-532f-4a5d-acc3-7944a208d53d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.379695686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=900d2596-532f-4a5d-acc3-7944a208d53d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.380302516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712786022459649353,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712786020508110954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5843098e8f58e76b5e87b452629022743500b92173820f27b05241c46737470a,PodSandboxId:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712786011924871993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kubernetes.container.hash: ec06d454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712786010353918322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097ab2b7a3b21e861478a5265978920652458bdb04e361253d82c88339bbf66a,PodSandboxId:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712786000463841729,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:11b8446660febd3894b8ae348d19cb08dc586be0b366fe960017799e3ef498b9,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712785996449330227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666,PodSandboxId:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712785979114651690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3f65a763bc9e27d1d1cb7df78aaa507490cf2c0ef14a25459071556e5237bd19,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712785979187621109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63235fb
69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712785978670292567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74a5b5b09ea11da219a8235e9f05f
e927caf625ef95cdbf9ddb867aa7bcddce,PodSandboxId:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978953180708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59,PodSandboxId:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978833280294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712785978602127581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a
190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712785978526775957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05
169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf,PodSandboxId:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712785978512134053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e,PodSandboxId:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712785978086673766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubern
etes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14633f0c6042a07eb877fb35742fc7f78eaf5fc02579011e3f22392bd4705149,PodSandboxId:caa71de8a90bd8f405aa1d2b15a22b877e9efabdda5d3ab5654d3c60100c6f2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712785636537427350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kuberne
tes.container.hash: ec06d454,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90,PodSandboxId:477c4e121d289241b04e5bcba6621e3c962c07d0df1a2d85195741c8508989da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425699209159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2,PodSandboxId:7dcb837166455521932d4cb9f6dc4f1c30c3bbb463ace91c9b170b13eaa35891,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425572440741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330,PodSandboxId:215d1fe94079dd52ffc980ec77268193f9b6d373850752af5c7718762a5429df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0a
cea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712785423376402271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196,PodSandboxId:503f0fd82969812f24a9d05afabc98c944f7c8c319b5dd485703c8293c6cc2de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5
a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712785403622940875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482,PodSandboxId:3fad80ebf2adf1ef57f94d98afe692626c0000f4a7a16f2cc6934600c687c563,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1712785403608929394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[string]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=900d2596-532f-4a5d-acc3-7944a208d53d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.400592828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efc38afa-7dec-4782-b6d7-715e2224707f name=/runtime.v1.RuntimeService/Version
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.400662614Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efc38afa-7dec-4782-b6d7-715e2224707f name=/runtime.v1.RuntimeService/Version
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.403047816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c5af352-6ad4-483f-833a-b8a3b164862c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.403457464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712786276403433509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c5af352-6ad4-483f-833a-b8a3b164862c name=/runtime.v1.ImageService/ImageFsInfo
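	Likewise, the ListContainers, Version and ImageFsInfo responses above can be queried directly against CRI-O; a sketch under the same assumptions (ha-150873 profile, crictl available in the guest):
	
	# full container list, including CONTAINER_EXITED entries (matches ListContainers with no filter)
	out/minikube-linux-amd64 -p ha-150873 ssh "sudo crictl ps -a"
	# runtime name and version (matches the Version response: cri-o 1.29.1)
	out/minikube-linux-amd64 -p ha-150873 ssh "sudo crictl version"
	# image filesystem usage for /var/lib/containers/storage/overlay-images
	out/minikube-linux-amd64 -p ha-150873 ssh "sudo crictl imagefsinfo"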
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.404600281Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1210f4c8-e4b2-43c3-beec-68013142bca4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.404657428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1210f4c8-e4b2-43c3-beec-68013142bca4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.405298664Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712786022459649353,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712786020508110954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5843098e8f58e76b5e87b452629022743500b92173820f27b05241c46737470a,PodSandboxId:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712786011924871993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kubernetes.container.hash: ec06d454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712786010353918322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097ab2b7a3b21e861478a5265978920652458bdb04e361253d82c88339bbf66a,PodSandboxId:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712786000463841729,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:11b8446660febd3894b8ae348d19cb08dc586be0b366fe960017799e3ef498b9,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712785996449330227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666,PodSandboxId:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712785979114651690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:3f65a763bc9e27d1d1cb7df78aaa507490cf2c0ef14a25459071556e5237bd19,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712785979187621109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63235fb
69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712785978670292567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74a5b5b09ea11da219a8235e9f05f
e927caf625ef95cdbf9ddb867aa7bcddce,PodSandboxId:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978953180708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59,PodSandboxId:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978833280294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712785978602127581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a
190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712785978526775957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05
169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf,PodSandboxId:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712785978512134053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e,PodSandboxId:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712785978086673766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubern
etes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14633f0c6042a07eb877fb35742fc7f78eaf5fc02579011e3f22392bd4705149,PodSandboxId:caa71de8a90bd8f405aa1d2b15a22b877e9efabdda5d3ab5654d3c60100c6f2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712785636537427350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kuberne
tes.container.hash: ec06d454,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90,PodSandboxId:477c4e121d289241b04e5bcba6621e3c962c07d0df1a2d85195741c8508989da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425699209159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2,PodSandboxId:7dcb837166455521932d4cb9f6dc4f1c30c3bbb463ace91c9b170b13eaa35891,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712785425572440741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330,PodSandboxId:215d1fe94079dd52ffc980ec77268193f9b6d373850752af5c7718762a5429df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0a
cea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712785423376402271,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196,PodSandboxId:503f0fd82969812f24a9d05afabc98c944f7c8c319b5dd485703c8293c6cc2de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5
a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712785403622940875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482,PodSandboxId:3fad80ebf2adf1ef57f94d98afe692626c0000f4a7a16f2cc6934600c687c563,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1712785403608929394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[string]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1210f4c8-e4b2-43c3-beec-68013142bca4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.441307568Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9effc88-56ed-46e9-ba8b-72ff28aa8f88 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.441720382Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-npbvn,Uid:9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712786011696357994,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:47:13.552326955Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-150873,Uid:6ebab890b99987fdef4351dcb63a481c,Namespace:kube-system,Attempt:0,},State:SANDBOX
_READY,CreatedAt:1712786000339718361,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{kubernetes.io/config.hash: 6ebab890b99987fdef4351dcb63a481c,kubernetes.io/config.seen: 2024-04-10T21:52:52.119091229Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-lv7pk,Uid:3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785978043509967,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024
-04-10T21:43:45.072962538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&PodSandboxMetadata{Name:kube-proxy-4k6ws,Uid:ff82bf47-319e-444b-bb54-9f44b684bf06,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977994386047,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:43:42.773708235Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-150873,Uid:e6eff29f33f6e236015d4efe6b97593c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977958519061,
Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.12:8443,kubernetes.io/config.hash: e6eff29f33f6e236015d4efe6b97593c,kubernetes.io/config.seen: 2024-04-10T21:43:30.323971762Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-v7npj,Uid:20a44fe0-14c0-451f-b707-d129c6cb30d4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977951853465,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,k8s-app: kub
e-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:43:45.059775825Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-150873,Uid:05169d4b9723d694fde443a3079da775,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977933607023,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 05169d4b9723d694fde443a3079da775,kubernetes.io/config.seen: 2024-04-10T21:43:30.323973137Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&PodSandboxMet
adata{Name:etcd-ha-150873,Uid:33ead7f643006d5b17c362a714ce1716,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977919416244,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ead7f643006d5b17c362a714ce1716,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.12:2379,kubernetes.io/config.hash: 33ead7f643006d5b17c362a714ce1716,kubernetes.io/config.seen: 2024-04-10T21:43:30.323967672Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:033c01dc-895b-4eca-87b9-e5a8444c4c62,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977904480597,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,
io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-10T21:43:45.072798234Z,kubernetes.io/config.source: api,},RuntimeH
andler:,},&PodSandbox{Id:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&PodSandboxMetadata{Name:kindnet-twk5c,Uid:ebfddcc6-a190-4756-9096-1dc2cec68cf7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977874136037,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T21:43:42.804960393Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-150873,Uid:bb34a5c53d32d72142b8d7c7bfda2302,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712785977792890536,Labels:map[string]string{component: kube-scheduler,io.ku
bernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c53d32d72142b8d7c7bfda2302,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bb34a5c53d32d72142b8d7c7bfda2302,kubernetes.io/config.seen: 2024-04-10T21:43:30.323974267Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f9effc88-56ed-46e9-ba8b-72ff28aa8f88 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.443288569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc93443f-4597-4760-b206-28d0baceba8c name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.443366996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc93443f-4597-4760-b206-28d0baceba8c name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 21:57:56 ha-150873 crio[3171]: time="2024-04-10 21:57:56.443937912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20,PodSandboxId:ae41a3393e770f705bc90b8c611d2256537c88bd0fcdbe29edddc1e1954f28e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712786022459649353,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-twk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebfddcc6-a190-4756-9096-1dc2cec68cf7,},Annotations:map[string]string{io.kubernetes.container.hash: e810504,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b,PodSandboxId:7b338618978c4b0d279cda17d5926927fe117a89d2af2a813a35395c3791c96b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712786020508110954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05169d4b9723d694fde443a3079da775,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5843098e8f58e76b5e87b452629022743500b92173820f27b05241c46737470a,PodSandboxId:c3b45aeeff5a45390600af338dbb400459f46162f7f23f5596ca6a802f9f9b33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712786011924871993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-npbvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231,},Annotations:map[string]string{io.kubernetes.container.hash: ec06d454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907,PodSandboxId:ad883c5504ae7e3d33e6f82af471e250dc687313b278e854c8c290bea79247ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712786010353918322,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6eff29f33f6e236015d4efe6b97593c,},Annotations:map[string]string{io.kubernetes.container.hash: eb0f98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097ab2b7a3b21e861478a5265978920652458bdb04e361253d82c88339bbf66a,PodSandboxId:de11001c92427cdbff07fc29c19039b1af5709c1f71a07ffc554492a46b5fed4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712786000463841729,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ebab890b99987fdef4351dcb63a481c,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:11b8446660febd3894b8ae348d19cb08dc586be0b366fe960017799e3ef498b9,PodSandboxId:90789bebad3c1838a125d7bfe5747cc1f67b3713764f68c2f4516d186196541a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712785996449330227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 033c01dc-895b-4eca-87b9-e5a8444c4c62,},Annotations:map[string]string{io.kubernetes.container.hash: 3e4e98cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666,PodSandboxId:9a4ff4c8cdaeb05bf27351e4ebc587695641cff2231e8fa428d7abf83e07cc07,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712785979114651690,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4k6ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff82bf47-319e-444b-bb54-9f44b684bf06,},Annotations:map[string]string{io.kubernetes.container.hash: 25e1f47f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e74a5b5b09ea11da219a8235e9f05fe927caf625ef95cdbf9ddb867aa7bcddce,PodSandboxId:1d197657a29aa7b4f583e81c8633fcbfe6303b83f088706399a4781170e698ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978953180708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v7npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a44fe0-14c0-451f-b707-d129c6cb30d4,},Annotations:map[string]string{io.kubernetes.container.hash: bb0000a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59,PodSandboxId:db6b3dc77ab9f87a6afb143347c0940716ca8a70e5967378ce9620c03baf38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712785978833280294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lv7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea5fd9f-ee96-4394-bca6-29cbcf5ad31e,},Annotations:map[string]string{io.kubernetes.container.hash: 143c6295,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf,PodSandboxId:1426294dc588a4879a72d63f686a321e11f0043f50b3700c7d985354bedfe919,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712785978512134053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 33ead7f643006d5b17c362a714ce1716,},Annotations:map[string]string{io.kubernetes.container.hash: 4fdb0fc7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e,PodSandboxId:e25cccd588a353348042677451b278c282a7f154e0ca5139a21c1e8d4396439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712785978086673766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a5c5
3d32d72142b8d7c7bfda2302,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc93443f-4597-4760-b206-28d0baceba8c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	967394af8ee84       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               2                   ae41a3393e770       kindnet-twk5c
	edff49423a013       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      4 minutes ago       Running             kube-controller-manager   2                   7b338618978c4       kube-controller-manager-ha-150873
	5843098e8f58e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   c3b45aeeff5a4       busybox-7fdf7869d9-npbvn
	e235f69edc188       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      4 minutes ago       Running             kube-apiserver            2                   ad883c5504ae7       kube-apiserver-ha-150873
	097ab2b7a3b21       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   de11001c92427       kube-vip-ha-150873
	11b8446660feb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       2                   90789bebad3c1       storage-provisioner
	3f65a763bc9e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       1                   90789bebad3c1       storage-provisioner
	3934f37403fb0       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      4 minutes ago       Running             kube-proxy                1                   9a4ff4c8cdaeb       kube-proxy-4k6ws
	e74a5b5b09ea1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   1d197657a29aa       coredns-76f75df574-v7npj
	6931407fcaa90       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   db6b3dc77ab9f       coredns-76f75df574-lv7pk
	d63235fb69e81       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      4 minutes ago       Exited              kube-apiserver            1                   ad883c5504ae7       kube-apiserver-ha-150873
	4a7bcf4449817       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Exited              kindnet-cni               1                   ae41a3393e770       kindnet-twk5c
	30486e357194b       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      4 minutes ago       Exited              kube-controller-manager   1                   7b338618978c4       kube-controller-manager-ha-150873
	cd121dc8b073b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   1426294dc588a       etcd-ha-150873
	290a7180b1a6d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      4 minutes ago       Running             kube-scheduler            1                   e25cccd588a35       kube-scheduler-ha-150873
	14633f0c6042a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago      Exited              busybox                   0                   caa71de8a90bd       busybox-7fdf7869d9-npbvn
	9eed089604ddb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago      Exited              coredns                   0                   477c4e121d289       coredns-76f75df574-lv7pk
	c0533fe1ed46a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago      Exited              coredns                   0                   7dcb837166455       coredns-76f75df574-v7npj
	fb2a3cd16e18f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      14 minutes ago      Exited              kube-proxy                0                   215d1fe94079d       kube-proxy-4k6ws
	e35fb1c2a3e47       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      14 minutes ago      Exited              kube-scheduler            0                   503f0fd829698       kube-scheduler-ha-150873
	9b735e1f5e994       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago      Exited              etcd                      0                   3fad80ebf2adf       etcd-ha-150873
	
	
	==> coredns [6931407fcaa90fbe7cac83ed492d072d4c9bb966e765cea62da8ff26da536b59] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59529 - 44673 "HINFO IN 6865337407444359154.4190026639490365014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009281124s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[984018151]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:00.782) (total time: 10000ms):
	Trace[984018151]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (21:53:10.782)
	Trace[984018151]: [10.000761949s] [10.000761949s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1469142125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:15.210) (total time: 10002ms):
	Trace[1469142125]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:53:25.212)
	Trace[1469142125]: [10.00202566s] [10.00202566s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: Trace[1635658144]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:12.729) (total time: 13491ms):
	Trace[1635658144]: ---"Objects listed" error:<nil> 13491ms (21:53:26.221)
	Trace[1635658144]: [13.491453637s] [13.491453637s] END
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9eed089604ddb2adc879d9a9093f33fb8fdae41b062d71837f171fc366523b90] <==
	[INFO] 10.244.2.2:35532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176815s
	[INFO] 10.244.2.2:46490 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00333312s
	[INFO] 10.244.2.2:59282 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000215594s
	[INFO] 10.244.2.2:58799 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008975865s
	[INFO] 10.244.2.2:45397 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000213098s
	[INFO] 10.244.0.4:52917 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001975693s
	[INFO] 10.244.0.4:52069 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000198372s
	[INFO] 10.244.2.3:49729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00309164s
	[INFO] 10.244.2.3:49196 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104729s
	[INFO] 10.244.2.3:37101 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001584562s
	[INFO] 10.244.2.3:33940 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013568s
	[INFO] 10.244.2.2:34643 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163795s
	[INFO] 10.244.2.2:58342 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152915s
	[INFO] 10.244.2.2:59095 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000220546s
	[INFO] 10.244.0.4:52549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152613s
	[INFO] 10.244.0.4:41887 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065745s
	[INFO] 10.244.2.3:34633 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114623s
	[INFO] 10.244.2.3:40780 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152976s
	[INFO] 10.244.2.3:56929 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159054s
	[INFO] 10.244.0.4:44686 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143676s
	[INFO] 10.244.2.3:45732 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00027179s
	[INFO] 10.244.2.3:51852 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093379s
	[INFO] 10.244.2.3:37254 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000217939s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c0533fe1ed46a2b0635aaf8d09515a53eff6c3f8d37327d0c287cabdb47062d2] <==
	[INFO] 10.244.2.2:50233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174724s
	[INFO] 10.244.0.4:60778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115782s
	[INFO] 10.244.0.4:55354 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140158s
	[INFO] 10.244.0.4:34877 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072811s
	[INFO] 10.244.0.4:40982 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001537342s
	[INFO] 10.244.0.4:36482 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139429s
	[INFO] 10.244.0.4:48167 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117261s
	[INFO] 10.244.2.3:57824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145834s
	[INFO] 10.244.2.3:60878 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110073s
	[INFO] 10.244.2.3:50412 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088375s
	[INFO] 10.244.2.3:59910 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095369s
	[INFO] 10.244.2.2:33569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205878s
	[INFO] 10.244.0.4:60872 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079429s
	[INFO] 10.244.0.4:48499 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047325s
	[INFO] 10.244.2.3:41098 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109916s
	[INFO] 10.244.2.2:34300 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163456s
	[INFO] 10.244.2.2:45314 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000257345s
	[INFO] 10.244.2.2:55265 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000197798s
	[INFO] 10.244.2.2:59960 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169363s
	[INFO] 10.244.0.4:49737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111508s
	[INFO] 10.244.0.4:59509 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009348s
	[INFO] 10.244.0.4:38242 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134044s
	[INFO] 10.244.2.3:43629 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170405s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e74a5b5b09ea11da219a8235e9f05fe927caf625ef95cdbf9ddb867aa7bcddce] <==
	[INFO] 127.0.0.1:59682 - 42308 "HINFO IN 8477535371905894803.3423353897684578802. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009750105s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[237118921]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:00.543) (total time: 10001ms):
	Trace[237118921]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:53:10.545)
	Trace[237118921]: [10.001633953s] [10.001633953s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[791440332]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:00.580) (total time: 10002ms):
	Trace[791440332]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (21:53:10.582)
	Trace[791440332]: [10.002164677s] [10.002164677s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[524857238]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 21:53:02.959) (total time: 10004ms):
	Trace[524857238]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10004ms (21:53:12.964)
	Trace[524857238]: [10.004646672s] [10.004646672s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33754->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33754->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33760->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33760->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-150873
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150873
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=ha-150873
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T21_43_30_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 21:43:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150873
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 21:57:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 21:53:40 +0000   Wed, 10 Apr 2024 21:43:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 21:53:40 +0000   Wed, 10 Apr 2024 21:43:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 21:53:40 +0000   Wed, 10 Apr 2024 21:43:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 21:53:40 +0000   Wed, 10 Apr 2024 21:43:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    ha-150873
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 210c19b8db52473d80a14cc460d46534
	  System UUID:                210c19b8-db52-473d-80a1-4cc460d46534
	  Boot ID:                    bf770617-465c-438e-8544-6b98882b4c4e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-npbvn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-76f75df574-lv7pk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-76f75df574-v7npj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-150873                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-twk5c                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-150873             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-150873    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-4k6ws                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-150873             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-150873                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m22s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-150873 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-150873 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-150873 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-150873 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-150873 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-150873 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           14m                    node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-150873 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Normal   RegisteredNode           8m39s                  node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Warning  ContainerGCFailed        5m26s (x2 over 6m26s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Normal   RegisteredNode           4m3s                   node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	  Normal   RegisteredNode           3m4s                   node-controller  Node ha-150873 event: Registered Node ha-150873 in Controller
	
	
	Name:               ha-150873-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150873-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=ha-150873
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_10T21_45_41_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 21:45:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150873-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 21:57:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 21:56:42 +0000   Wed, 10 Apr 2024 21:56:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 21:56:42 +0000   Wed, 10 Apr 2024 21:56:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 21:56:42 +0000   Wed, 10 Apr 2024 21:56:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 21:56:42 +0000   Wed, 10 Apr 2024 21:56:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.213
	  Hostname:    ha-150873-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3944eabc4864788a38a16b5adee9ecb
	  System UUID:                d3944eab-c486-4788-a38a-16b5adee9ecb
	  Boot ID:                    96f79656-01cd-45f6-920c-f1545a109dac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ctf8n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 etcd-ha-150873-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-lgqxz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-150873-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-150873-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-f5g7z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-150873-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-150873-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m22s                kube-proxy       
	  Normal   Starting                 12m                  kube-proxy       
	  Normal   Starting                 8m53s                kube-proxy       
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-150873-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                  node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-150873-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-150873-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                  node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Warning  Rebooted                 9m11s                kubelet          Node ha-150873-m02 has been rebooted, boot id: 96f79656-01cd-45f6-920c-f1545a109dac
	  Normal   Starting                 9m11s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m11s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           8m39s                node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   RegisteredNode           4m18s                node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   RegisteredNode           4m3s                 node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   RegisteredNode           3m4s                 node-controller  Node ha-150873-m02 event: Registered Node ha-150873-m02 in Controller
	  Normal   NodeNotReady             108s                 node-controller  Node ha-150873-m02 status is now: NodeNotReady
	  Normal   NodeHasNoDiskPressure    74s (x2 over 9m11s)  kubelet          Node ha-150873-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x2 over 9m11s)  kubelet          Node ha-150873-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  74s (x2 over 9m11s)  kubelet          Node ha-150873-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                74s                  kubelet          Node ha-150873-m02 status is now: NodeReady
	
	
	Name:               ha-150873-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150873-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=ha-150873
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_10T21_47_51_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 21:47:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150873-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 21:55:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 10 Apr 2024 21:55:08 +0000   Wed, 10 Apr 2024 21:56:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 10 Apr 2024 21:55:08 +0000   Wed, 10 Apr 2024 21:56:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 10 Apr 2024 21:55:08 +0000   Wed, 10 Apr 2024 21:56:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 10 Apr 2024 21:55:08 +0000   Wed, 10 Apr 2024 21:56:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    ha-150873-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27b922c08d3847d0b0b745258dd668cb
	  System UUID:                27b922c0-8d38-47d0-b0b7-45258dd668cb
	  Boot ID:                    67a43c58-9402-4390-ba4b-7856a43ec8b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ppd46    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-p9lff               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-8ttrp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node ha-150873-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node ha-150873-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node ha-150873-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   NodeReady                9m55s                  kubelet          Node ha-150873-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m39s                  node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   NodeNotReady             7m59s                  node-controller  Node ha-150873-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   RegisteredNode           4m3s                   node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   RegisteredNode           3m4s                   node-controller  Node ha-150873-m04 event: Registered Node ha-150873-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-150873-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-150873-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-150873-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-150873-m04 has been rebooted, boot id: 67a43c58-9402-4390-ba4b-7856a43ec8b1
	  Normal   NodeReady                2m48s                  kubelet          Node ha-150873-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s                   node-controller  Node ha-150873-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +7.447730] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.056947] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062963] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.170702] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.150944] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.279384] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.469027] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.060705] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.318971] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.060287] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.868224] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	[  +0.092851] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.097200] kauditd_printk_skb: 21 callbacks suppressed
	[Apr10 21:45] kauditd_printk_skb: 72 callbacks suppressed
	[Apr10 21:52] systemd-fstab-generator[3090]: Ignoring "noauto" option for root device
	[  +0.169551] systemd-fstab-generator[3102]: Ignoring "noauto" option for root device
	[  +0.192650] systemd-fstab-generator[3116]: Ignoring "noauto" option for root device
	[  +0.148316] systemd-fstab-generator[3128]: Ignoring "noauto" option for root device
	[  +0.316741] systemd-fstab-generator[3156]: Ignoring "noauto" option for root device
	[  +2.656359] systemd-fstab-generator[3258]: Ignoring "noauto" option for root device
	[  +5.666994] kauditd_printk_skb: 122 callbacks suppressed
	[Apr10 21:53] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.590165] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.699192] kauditd_printk_skb: 11 callbacks suppressed
	[ +16.648316] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9b735e1f5e9943f9daf11c84d8a1ecb16928f47d7abdcf35ccb712f504af9482] <==
	{"level":"warn","ts":"2024-04-10T21:51:18.251623Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.143:2380/version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: i/o timeout"}
	{"level":"warn","ts":"2024-04-10T21:51:18.251729Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9296e3e22927d2a2","error":"Get \"https://192.168.39.143:2380/version\": dial tcp 192.168.39.143:2380: i/o timeout"}
	{"level":"info","ts":"2024-04-10T21:51:18.261155Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ce0da4c06908115c"}
	{"level":"warn","ts":"2024-04-10T21:51:18.261347Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ce0da4c06908115c"}
	{"level":"info","ts":"2024-04-10T21:51:18.261405Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ce0da4c06908115c"}
	{"level":"warn","ts":"2024-04-10T21:51:18.261515Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ce0da4c06908115c"}
	{"level":"info","ts":"2024-04-10T21:51:18.261552Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ce0da4c06908115c"}
	{"level":"info","ts":"2024-04-10T21:51:18.261712Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"ce0da4c06908115c"}
	{"level":"warn","ts":"2024-04-10T21:51:18.262117Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"ce0da4c06908115c","error":"context canceled"}
	{"level":"warn","ts":"2024-04-10T21:51:18.262207Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"ce0da4c06908115c","error":"failed to read ce0da4c06908115c on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-10T21:51:18.26227Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"ce0da4c06908115c"}
	{"level":"warn","ts":"2024-04-10T21:51:18.262466Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"ce0da4c06908115c","error":"context canceled"}
	{"level":"info","ts":"2024-04-10T21:51:18.262536Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"ce0da4c06908115c"}
	{"level":"info","ts":"2024-04-10T21:51:18.262582Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ce0da4c06908115c"}
	{"level":"info","ts":"2024-04-10T21:51:18.262614Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.26265Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.262696Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.263541Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.266764Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.266883Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.266925Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:51:18.271774Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"warn","ts":"2024-04-10T21:51:18.272072Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.213:39090","server-name":"","error":"read tcp 192.168.39.12:2380->192.168.39.213:39090: use of closed network connection"}
	{"level":"info","ts":"2024-04-10T21:51:18.823098Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"info","ts":"2024-04-10T21:51:18.823212Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-150873","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.12:2380"],"advertise-client-urls":["https://192.168.39.12:2379"]}
	
	
	==> etcd [cd121dc8b073b74475d28df919aa7d986e22e9916bf717630de3c193f121d3bf] <==
	{"level":"info","ts":"2024-04-10T21:54:31.892565Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:54:31.900889Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:54:31.917631Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ab0e927fe14112bb","to":"9296e3e22927d2a2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-10T21:54:31.919836Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:54:31.91968Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ab0e927fe14112bb","to":"9296e3e22927d2a2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-10T21:54:31.920113Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"warn","ts":"2024-04-10T21:54:34.502007Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-10T21:54:34.50211Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9296e3e22927d2a2","rtt":"0s","error":"dial tcp 192.168.39.143:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-10T21:55:22.325412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb switched to configuration voters=(12325950308097266363 14847704692813205852)"}
	{"level":"info","ts":"2024-04-10T21:55:22.327767Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"5f0195cf24a31222","local-member-id":"ab0e927fe14112bb","removed-remote-peer-id":"9296e3e22927d2a2","removed-remote-peer-urls":["https://192.168.39.143:2380"]}
	{"level":"info","ts":"2024-04-10T21:55:22.3279Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"warn","ts":"2024-04-10T21:55:22.329119Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:55:22.329183Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"warn","ts":"2024-04-10T21:55:22.329653Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:55:22.329699Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:55:22.329848Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"warn","ts":"2024-04-10T21:55:22.330067Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2","error":"context canceled"}
	{"level":"warn","ts":"2024-04-10T21:55:22.330129Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"9296e3e22927d2a2","error":"failed to read 9296e3e22927d2a2 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-10T21:55:22.330169Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"warn","ts":"2024-04-10T21:55:22.33026Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2","error":"context canceled"}
	{"level":"info","ts":"2024-04-10T21:55:22.33031Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:55:22.330329Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9296e3e22927d2a2"}
	{"level":"info","ts":"2024-04-10T21:55:22.330343Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"ab0e927fe14112bb","removed-remote-peer-id":"9296e3e22927d2a2"}
	{"level":"warn","ts":"2024-04-10T21:55:22.348385Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ab0e927fe14112bb","remote-peer-id-stream-handler":"ab0e927fe14112bb","remote-peer-id-from":"9296e3e22927d2a2"}
	{"level":"warn","ts":"2024-04-10T21:55:22.354667Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.143:55696","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:57:57 up 15 min,  0 users,  load average: 0.23, 0.34, 0.26
	Linux ha-150873 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757] <==
	I0410 21:52:59.385771       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0410 21:52:59.387907       1 main.go:107] hostIP = 192.168.39.12
	podIP = 192.168.39.12
	I0410 21:52:59.388262       1 main.go:116] setting mtu 1500 for CNI 
	I0410 21:52:59.388328       1 main.go:146] kindnetd IP family: "ipv4"
	I0410 21:52:59.388715       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0410 21:53:09.651210       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0410 21:53:09.652240       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0410 21:53:21.331104       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.62:57886->10.96.0.1:443: read: connection reset by peer
	I0410 21:53:23.334966       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0410 21:53:26.336830       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [967394af8ee84d12844f4c1fe58d3268b52d608806a9bbe4d030c8f4fab95b20] <==
	I0410 21:57:13.651229       1 main.go:250] Node ha-150873-m04 has CIDR [10.244.3.0/24] 
	I0410 21:57:23.658649       1 main.go:223] Handling node with IPs: map[192.168.39.12:{}]
	I0410 21:57:23.658699       1 main.go:227] handling current node
	I0410 21:57:23.658715       1 main.go:223] Handling node with IPs: map[192.168.39.213:{}]
	I0410 21:57:23.658721       1 main.go:250] Node ha-150873-m02 has CIDR [10.244.1.0/24] 
	I0410 21:57:23.658851       1 main.go:223] Handling node with IPs: map[192.168.39.144:{}]
	I0410 21:57:23.658857       1 main.go:250] Node ha-150873-m04 has CIDR [10.244.3.0/24] 
	I0410 21:57:33.675955       1 main.go:223] Handling node with IPs: map[192.168.39.12:{}]
	I0410 21:57:33.676187       1 main.go:227] handling current node
	I0410 21:57:33.676203       1 main.go:223] Handling node with IPs: map[192.168.39.213:{}]
	I0410 21:57:33.676211       1 main.go:250] Node ha-150873-m02 has CIDR [10.244.1.0/24] 
	I0410 21:57:33.677225       1 main.go:223] Handling node with IPs: map[192.168.39.144:{}]
	I0410 21:57:33.677283       1 main.go:250] Node ha-150873-m04 has CIDR [10.244.3.0/24] 
	I0410 21:57:43.683621       1 main.go:223] Handling node with IPs: map[192.168.39.12:{}]
	I0410 21:57:43.683888       1 main.go:227] handling current node
	I0410 21:57:43.683972       1 main.go:223] Handling node with IPs: map[192.168.39.213:{}]
	I0410 21:57:43.684061       1 main.go:250] Node ha-150873-m02 has CIDR [10.244.1.0/24] 
	I0410 21:57:43.684239       1 main.go:223] Handling node with IPs: map[192.168.39.144:{}]
	I0410 21:57:43.684292       1 main.go:250] Node ha-150873-m04 has CIDR [10.244.3.0/24] 
	I0410 21:57:53.690636       1 main.go:223] Handling node with IPs: map[192.168.39.12:{}]
	I0410 21:57:53.690736       1 main.go:227] handling current node
	I0410 21:57:53.690763       1 main.go:223] Handling node with IPs: map[192.168.39.213:{}]
	I0410 21:57:53.690782       1 main.go:250] Node ha-150873-m02 has CIDR [10.244.1.0/24] 
	I0410 21:57:53.690883       1 main.go:223] Handling node with IPs: map[192.168.39.144:{}]
	I0410 21:57:53.691026       1 main.go:250] Node ha-150873-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d63235fb69e81b7ed849622e86a6ab34f47f6d81af7dfbce078caf844c937923] <==
	I0410 21:52:59.530397       1 options.go:222] external host was not specified, using 192.168.39.12
	I0410 21:52:59.532350       1 server.go:148] Version: v1.29.3
	I0410 21:52:59.532399       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 21:53:00.300095       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0410 21:53:00.309296       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0410 21:53:00.309373       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0410 21:53:00.309772       1 instance.go:297] Using reconciler: lease
	W0410 21:53:20.295752       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0410 21:53:20.297580       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0410 21:53:20.311514       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e235f69edc18857fcd2070c996c68b599ab46f71b62c95fcc7e720038bca5907] <==
	I0410 21:53:32.542379       1 establishing_controller.go:76] Starting EstablishingController
	I0410 21:53:32.542415       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0410 21:53:32.542485       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0410 21:53:32.542524       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0410 21:53:32.543753       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0410 21:53:32.543790       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0410 21:53:32.644233       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0410 21:53:32.645746       1 aggregator.go:165] initial CRD sync complete...
	I0410 21:53:32.645785       1 autoregister_controller.go:141] Starting autoregister controller
	I0410 21:53:32.645794       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0410 21:53:32.647236       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0410 21:53:32.647268       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0410 21:53:32.661864       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0410 21:53:32.671939       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	W0410 21:53:32.709057       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.213]
	I0410 21:53:32.711221       1 controller.go:624] quota admission added evaluator for: endpoints
	I0410 21:53:32.728787       1 shared_informer.go:318] Caches are synced for configmaps
	I0410 21:53:32.731252       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0410 21:53:32.732864       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0410 21:53:32.738893       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0410 21:53:32.739301       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E0410 21:53:32.743789       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0410 21:53:32.747674       1 cache.go:39] Caches are synced for autoregister controller
	I0410 21:53:33.537493       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0410 21:53:33.978663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12 192.168.39.213]
	
	
	==> kube-controller-manager [30486e357194bcf533da86b9e1d1529c00dac6b511afebe2045eb8d0b254e33d] <==
	I0410 21:53:00.473448       1 serving.go:380] Generated self-signed cert in-memory
	I0410 21:53:00.911642       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0410 21:53:00.911695       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 21:53:00.913765       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0410 21:53:00.913925       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0410 21:53:00.914811       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0410 21:53:00.914887       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0410 21:53:21.321644       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.12:8443/healthz\": dial tcp 192.168.39.12:8443: connect: connection refused"
	
	
	==> kube-controller-manager [edff49423a0137fe750956ba320c3555c41762c96e4b52d61dd538f1387f3e8b] <==
	E0410 21:55:53.013188       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150873-m03\" not found" node="ha-150873-m03"
	E0410 21:55:53.013219       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150873-m03\" not found" node="ha-150873-m03"
	E0410 21:55:53.013249       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150873-m03\" not found" node="ha-150873-m03"
	I0410 21:56:08.209362       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-150873-m04"
	I0410 21:56:08.210654       1 event.go:376] "Event occurred" object="ha-150873-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-150873-m02 status is now: NodeNotReady"
	I0410 21:56:08.227140       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ctf8n" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 21:56:08.255874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="30.430973ms"
	I0410 21:56:08.257244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="400.381µs"
	I0410 21:56:08.256479       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-150873-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 21:56:08.284865       1 event.go:376] "Event occurred" object="kube-system/kindnet-lgqxz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 21:56:08.326070       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-150873-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 21:56:08.348376       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-150873-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 21:56:08.382581       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-f5g7z" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 21:56:08.419090       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-150873-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 21:56:08.439474       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-150873-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 21:56:08.763395       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="17.234433ms"
	I0410 21:56:08.763685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="114.801µs"
	E0410 21:56:13.014407       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150873-m03\" not found" node="ha-150873-m03"
	E0410 21:56:13.014476       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150873-m03\" not found" node="ha-150873-m03"
	E0410 21:56:13.014485       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150873-m03\" not found" node="ha-150873-m03"
	E0410 21:56:13.014491       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150873-m03\" not found" node="ha-150873-m03"
	E0410 21:56:13.014497       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150873-m03\" not found" node="ha-150873-m03"
	I0410 21:56:43.463805       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ctf8n" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ctf8n"
	I0410 21:56:45.759207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="15.605005ms"
	I0410 21:56:45.761287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="71.122µs"
	
	
	==> kube-proxy [3934f37403fb0beca32412cd8af38217c3eaabcbd92daf292e726a56c1e6a666] <==
	I0410 21:53:00.635114       1 server_others.go:72] "Using iptables proxy"
	E0410 21:53:10.639387       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150873\": net/http: TLS handshake timeout"
	E0410 21:53:23.824509       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150873\": dial tcp 192.168.39.254:8443: connect: no route to host - error from a previous attempt: read tcp 192.168.39.254:54440->192.168.39.254:8443: read: connection reset by peer"
	E0410 21:53:29.968692       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150873\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0410 21:53:34.072077       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	I0410 21:53:34.119667       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 21:53:34.119698       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 21:53:34.119757       1 server_others.go:168] "Using iptables Proxier"
	I0410 21:53:34.122961       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 21:53:34.123431       1 server.go:865] "Version info" version="v1.29.3"
	I0410 21:53:34.123520       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 21:53:34.125671       1 config.go:188] "Starting service config controller"
	I0410 21:53:34.125768       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 21:53:34.125820       1 config.go:97] "Starting endpoint slice config controller"
	I0410 21:53:34.125843       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 21:53:34.127857       1 config.go:315] "Starting node config controller"
	I0410 21:53:34.127897       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 21:53:34.226948       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0410 21:53:34.227076       1 shared_informer.go:318] Caches are synced for service config
	I0410 21:53:34.228561       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [fb2a3cd16e18f44024f6ab2f1fbc983d58ea0b2f8dbeb32ab81ec676fc72e330] <==
	I0410 21:43:43.749344       1 server_others.go:72] "Using iptables proxy"
	I0410 21:43:43.784502       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	I0410 21:43:43.854319       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 21:43:43.854383       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 21:43:43.854398       1 server_others.go:168] "Using iptables Proxier"
	I0410 21:43:43.858159       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 21:43:43.858833       1 server.go:865] "Version info" version="v1.29.3"
	I0410 21:43:43.858869       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 21:43:43.860754       1 config.go:188] "Starting service config controller"
	I0410 21:43:43.861101       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 21:43:43.861154       1 config.go:97] "Starting endpoint slice config controller"
	I0410 21:43:43.861161       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 21:43:43.862168       1 config.go:315] "Starting node config controller"
	I0410 21:43:43.862199       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 21:43:43.961250       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0410 21:43:43.961252       1 shared_informer.go:318] Caches are synced for service config
	I0410 21:43:43.963072       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [290a7180b1a6d450d47f0ee5459e99f09e131f3c4f6ff26fbab860c8133ae13e] <==
	W0410 21:53:30.130751       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:30.130844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:30.210193       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:30.210262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:30.457616       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:30.457698       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:30.562591       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.12:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:30.562742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.12:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:30.576712       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.39.12:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0410 21:53:30.576837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.12:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0410 21:53:32.569826       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0410 21:53:32.571178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0410 21:53:32.571410       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0410 21:53:32.573114       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0410 21:53:32.573459       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0410 21:53:32.573561       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0410 21:53:35.327664       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0410 21:55:18.951856       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-ppd46\": pod busybox-7fdf7869d9-ppd46 is already assigned to node \"ha-150873-m04\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-ppd46" node="ha-150873-m04"
	E0410 21:55:18.956645       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod bedc9cec-30f2-4dda-b572-e1a30a1951c9(default/busybox-7fdf7869d9-ppd46) wasn't assumed so cannot be forgotten"
	E0410 21:55:18.957106       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-ppd46\": pod busybox-7fdf7869d9-ppd46 is already assigned to node \"ha-150873-m04\"" pod="default/busybox-7fdf7869d9-ppd46"
	I0410 21:55:18.957304       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-ppd46" node="ha-150873-m04"
	E0410 21:55:18.995269       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-ctf8n\": pod busybox-7fdf7869d9-ctf8n is already assigned to node \"ha-150873-m02\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-ctf8n" node="ha-150873-m02"
	E0410 21:55:18.995367       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod be49c9b5-a953-4372-8c00-c26a6972ecb7(default/busybox-7fdf7869d9-ctf8n) wasn't assumed so cannot be forgotten"
	E0410 21:55:18.995415       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-ctf8n\": pod busybox-7fdf7869d9-ctf8n is already assigned to node \"ha-150873-m02\"" pod="default/busybox-7fdf7869d9-ctf8n"
	I0410 21:55:18.995441       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-ctf8n" node="ha-150873-m02"
	
	
	==> kube-scheduler [e35fb1c2a3e4755b04eca6fabf4b21e19e1b19765a53119054c85ec43b017196] <==
	E0410 21:43:28.107358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0410 21:43:28.162684       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0410 21:43:28.162743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0410 21:43:28.163695       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0410 21:43:28.163748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0410 21:43:31.036697       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0410 21:47:13.512657       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-c58s7\": pod busybox-7fdf7869d9-c58s7 is already assigned to node \"ha-150873-m03\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-c58s7" node="ha-150873-m03"
	E0410 21:47:13.513348       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-c58s7\": pod busybox-7fdf7869d9-c58s7 is already assigned to node \"ha-150873-m03\"" pod="default/busybox-7fdf7869d9-c58s7"
	I0410 21:47:13.559078       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="9ff96a1f-d71f-41b7-89ec-8cb7a94c0231" pod="default/busybox-7fdf7869d9-npbvn" assumedNode="ha-150873" currentNode="ha-150873-m02"
	E0410 21:47:13.566632       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-v9dkg\": pod busybox-7fdf7869d9-v9dkg is already assigned to node \"ha-150873-m03\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-v9dkg" node="ha-150873-m03"
	E0410 21:47:13.567918       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod cb1241f3-24e3-42dc-999d-813be2d647d3(default/busybox-7fdf7869d9-v9dkg) wasn't assumed so cannot be forgotten"
	E0410 21:47:13.570306       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-npbvn\": pod busybox-7fdf7869d9-npbvn is already assigned to node \"ha-150873\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-npbvn" node="ha-150873-m02"
	E0410 21:47:13.570933       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 9ff96a1f-d71f-41b7-89ec-8cb7a94c0231(default/busybox-7fdf7869d9-npbvn) was assumed on ha-150873-m02 but assigned to ha-150873"
	E0410 21:47:13.571181       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-npbvn\": pod busybox-7fdf7869d9-npbvn is already assigned to node \"ha-150873\"" pod="default/busybox-7fdf7869d9-npbvn"
	I0410 21:47:13.571297       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-npbvn" node="ha-150873"
	E0410 21:47:13.573202       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-v9dkg\": pod busybox-7fdf7869d9-v9dkg is already assigned to node \"ha-150873-m03\"" pod="default/busybox-7fdf7869d9-v9dkg"
	I0410 21:47:13.573415       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-v9dkg" node="ha-150873-m03"
	E0410 21:47:51.027492       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8ttrp\": pod kube-proxy-8ttrp is already assigned to node \"ha-150873-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8ttrp" node="ha-150873-m04"
	E0410 21:47:51.027849       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod fc2bb477-e139-43c4-a27a-00a2c214d2d3(kube-system/kube-proxy-8ttrp) wasn't assumed so cannot be forgotten"
	E0410 21:47:51.028127       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8ttrp\": pod kube-proxy-8ttrp is already assigned to node \"ha-150873-m04\"" pod="kube-system/kube-proxy-8ttrp"
	I0410 21:47:51.028323       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8ttrp" node="ha-150873-m04"
	E0410 21:47:51.044462       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p9lff\": pod kindnet-p9lff is already assigned to node \"ha-150873-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p9lff" node="ha-150873-m04"
	E0410 21:47:51.044538       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 3e6cb7bd-84c9-4146-a1a0-32e97b598ec2(kube-system/kindnet-p9lff) wasn't assumed so cannot be forgotten"
	E0410 21:47:51.044623       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p9lff\": pod kindnet-p9lff is already assigned to node \"ha-150873-m04\"" pod="kube-system/kindnet-p9lff"
	I0410 21:47:51.044656       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p9lff" node="ha-150873-m04"
	
	
	==> kubelet <==
	Apr 10 21:53:42 ha-150873 kubelet[1392]: I0410 21:53:42.440441    1392 scope.go:117] "RemoveContainer" containerID="4a7bcf4449817eb8338b07a8e5efe49e28bcc08775cb33f9491a54950a0f3757"
	Apr 10 21:54:11 ha-150873 kubelet[1392]: I0410 21:54:11.630090    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-7fdf7869d9-npbvn" podStartSLOduration=416.31075719 podStartE2EDuration="6m58.62995677s" podCreationTimestamp="2024-04-10 21:47:13 +0000 UTC" firstStartedPulling="2024-04-10 21:47:14.204073658 +0000 UTC m=+224.015178302" lastFinishedPulling="2024-04-10 21:47:16.523273242 +0000 UTC m=+226.334377882" observedRunningTime="2024-04-10 21:47:17.501050937 +0000 UTC m=+227.312155596" watchObservedRunningTime="2024-04-10 21:54:11.62995677 +0000 UTC m=+641.441061429"
	Apr 10 21:54:30 ha-150873 kubelet[1392]: I0410 21:54:30.440873    1392 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-150873" podUID="ec4b952a-61d5-469d-a526-74228e791782"
	Apr 10 21:54:30 ha-150873 kubelet[1392]: I0410 21:54:30.465317    1392 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-150873"
	Apr 10 21:54:30 ha-150873 kubelet[1392]: E0410 21:54:30.517725    1392 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 21:54:30 ha-150873 kubelet[1392]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 21:54:30 ha-150873 kubelet[1392]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 21:54:30 ha-150873 kubelet[1392]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 21:54:30 ha-150873 kubelet[1392]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 21:54:40 ha-150873 kubelet[1392]: I0410 21:54:40.462166    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-vip-ha-150873" podStartSLOduration=10.462051919 podStartE2EDuration="10.462051919s" podCreationTimestamp="2024-04-10 21:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-10 21:54:40.461606086 +0000 UTC m=+670.272710745" watchObservedRunningTime="2024-04-10 21:54:40.462051919 +0000 UTC m=+670.273156577"
	Apr 10 21:55:30 ha-150873 kubelet[1392]: E0410 21:55:30.515944    1392 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 21:55:30 ha-150873 kubelet[1392]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 21:55:30 ha-150873 kubelet[1392]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 21:55:30 ha-150873 kubelet[1392]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 21:55:30 ha-150873 kubelet[1392]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 21:56:30 ha-150873 kubelet[1392]: E0410 21:56:30.513877    1392 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 21:56:30 ha-150873 kubelet[1392]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 21:56:30 ha-150873 kubelet[1392]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 21:56:30 ha-150873 kubelet[1392]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 21:56:30 ha-150873 kubelet[1392]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 21:57:30 ha-150873 kubelet[1392]: E0410 21:57:30.516030    1392 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 21:57:30 ha-150873 kubelet[1392]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 21:57:30 ha-150873 kubelet[1392]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 21:57:30 ha-150873 kubelet[1392]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 21:57:30 ha-150873 kubelet[1392]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0410 21:57:55.900673   31109 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18610-5679/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
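For context, the "bufio.Scanner: token too long" failure in the stderr block above is Go's bufio.ErrTooLong: a bufio.Scanner rejects any single token (here, one log line) longer than its buffer limit, 64 KiB by default, which very long lines in a file like lastStart.txt can exceed. A minimal sketch of how that error arises and how a larger Scanner buffer avoids it; the file path is hypothetical and this is not minikube's code:

// Reads a log file line-by-line; with the default Scanner buffer a single
// line over 64 KiB ends the scan with bufio.ErrTooLong
// ("bufio.Scanner: token too long"). Calling Buffer with a larger max avoids it.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("/tmp/lastStart.txt") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default limit is bufio.MaxScanTokenSize (64 KiB); allow lines up to 10 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		_ = sc.Text() // process one (possibly very long) line
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call above, this is where ErrTooLong surfaces.
		fmt.Fprintln(os.Stderr, "failed to read file:", err)
	}
}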
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-150873 -n ha-150873
helpers_test.go:261: (dbg) Run:  kubectl --context ha-150873 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.04s)

x
+
TestMultiNode/serial/RestartKeepsNodes (306.43s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-824789
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-824789
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-824789: exit status 82 (2m2.048235987s)

-- stdout --
	* Stopping node "multinode-824789-m03"  ...
	* Stopping node "multinode-824789-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-824789" : exit status 82
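As the stderr above shows, the stop attempt ended in GUEST_STOP_TIMEOUT with the guest still reporting state "Running" before the command exited with status 82. The general pattern behind that kind of failure is a poll-until-stopped loop with a deadline; a purely illustrative sketch (not minikube's actual implementation, all names and durations invented):

// stopWithTimeout polls a (stubbed) VM state until it reads "Stopped" or the
// deadline passes; here the stub never stops, so the call times out, mirroring
// the "unable to stop vm, current state \"Running\"" outcome above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState stands in for a machine-driver status query (hypothetical stub).
func vmState() string { return "Running" }

func stopWithTimeout(timeout, poll time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if vmState() == "Stopped" {
			return nil
		}
		time.Sleep(poll)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopWithTimeout(6*time.Second, 2*time.Second); err != nil {
		fmt.Println("stop failed:", err)
	}
}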
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-824789 --wait=true -v=8 --alsologtostderr
E0410 22:15:02.657145   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 22:16:54.111996   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:16:59.611366   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-824789 --wait=true -v=8 --alsologtostderr: (3m1.8510191s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-824789
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-824789 -n multinode-824789
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-824789 logs -n 25: (1.701108802s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m02:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2014130066/001/cp-test_multinode-824789-m02.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m02:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789:/home/docker/cp-test_multinode-824789-m02_multinode-824789.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n multinode-824789 sudo cat                                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /home/docker/cp-test_multinode-824789-m02_multinode-824789.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m02:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03:/home/docker/cp-test_multinode-824789-m02_multinode-824789-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n multinode-824789-m03 sudo cat                                   | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /home/docker/cp-test_multinode-824789-m02_multinode-824789-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp testdata/cp-test.txt                                                | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m03:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2014130066/001/cp-test_multinode-824789-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m03:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789:/home/docker/cp-test_multinode-824789-m03_multinode-824789.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n multinode-824789 sudo cat                                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /home/docker/cp-test_multinode-824789-m03_multinode-824789.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m03:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m02:/home/docker/cp-test_multinode-824789-m03_multinode-824789-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n multinode-824789-m02 sudo cat                                   | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /home/docker/cp-test_multinode-824789-m03_multinode-824789-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-824789 node stop m03                                                          | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	| node    | multinode-824789 node start                                                             | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-824789                                                                | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC |                     |
	| stop    | -p multinode-824789                                                                     | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC |                     |
	| start   | -p multinode-824789                                                                     | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:14 UTC | 10 Apr 24 22:17 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-824789                                                                | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:17 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:14:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:14:53.628490   40336 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:14:53.628629   40336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:14:53.628649   40336 out.go:304] Setting ErrFile to fd 2...
	I0410 22:14:53.628653   40336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:14:53.628881   40336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:14:53.629430   40336 out.go:298] Setting JSON to false
	I0410 22:14:53.630334   40336 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3436,"bootTime":1712783858,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:14:53.630391   40336 start.go:139] virtualization: kvm guest
	I0410 22:14:53.632834   40336 out.go:177] * [multinode-824789] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:14:53.634717   40336 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:14:53.634681   40336 notify.go:220] Checking for updates...
	I0410 22:14:53.637404   40336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:14:53.638739   40336 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:14:53.639991   40336 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:14:53.641393   40336 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:14:53.642680   40336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:14:53.644681   40336 config.go:182] Loaded profile config "multinode-824789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:14:53.644829   40336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:14:53.645345   40336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:14:53.645392   40336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:14:53.660256   40336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43223
	I0410 22:14:53.660691   40336 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:14:53.661293   40336 main.go:141] libmachine: Using API Version  1
	I0410 22:14:53.661312   40336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:14:53.661579   40336 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:14:53.661763   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:14:53.696461   40336 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:14:53.698019   40336 start.go:297] selected driver: kvm2
	I0410 22:14:53.698038   40336 start.go:901] validating driver "kvm2" against &{Name:multinode-824789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.29.3 ClusterName:multinode-824789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.224 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:14:53.698231   40336 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:14:53.698669   40336 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:14:53.698748   40336 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:14:53.713567   40336 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:14:53.714442   40336 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:14:53.714527   40336 cni.go:84] Creating CNI manager for ""
	I0410 22:14:53.714543   40336 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0410 22:14:53.714706   40336 start.go:340] cluster config:
	{Name:multinode-824789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-824789 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.224 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:14:53.714936   40336 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:14:53.717727   40336 out.go:177] * Starting "multinode-824789" primary control-plane node in "multinode-824789" cluster
	I0410 22:14:53.719399   40336 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:14:53.719434   40336 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 22:14:53.719450   40336 cache.go:56] Caching tarball of preloaded images
	I0410 22:14:53.719522   40336 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:14:53.719533   40336 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 22:14:53.719645   40336 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/config.json ...
	I0410 22:14:53.719850   40336 start.go:360] acquireMachinesLock for multinode-824789: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:14:53.719914   40336 start.go:364] duration metric: took 38.55µs to acquireMachinesLock for "multinode-824789"
	I0410 22:14:53.719940   40336 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:14:53.719956   40336 fix.go:54] fixHost starting: 
	I0410 22:14:53.720254   40336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:14:53.720340   40336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:14:53.734886   40336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0410 22:14:53.735339   40336 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:14:53.735818   40336 main.go:141] libmachine: Using API Version  1
	I0410 22:14:53.735839   40336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:14:53.736189   40336 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:14:53.736371   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:14:53.736568   40336 main.go:141] libmachine: (multinode-824789) Calling .GetState
	I0410 22:14:53.738112   40336 fix.go:112] recreateIfNeeded on multinode-824789: state=Running err=<nil>
	W0410 22:14:53.738131   40336 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:14:53.740221   40336 out.go:177] * Updating the running kvm2 "multinode-824789" VM ...
	I0410 22:14:53.741533   40336 machine.go:94] provisionDockerMachine start ...
	I0410 22:14:53.741550   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:14:53.741746   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:53.744122   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.744597   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:53.744627   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.744766   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:14:53.744931   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.745089   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.745215   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:14:53.745386   40336 main.go:141] libmachine: Using SSH client type: native
	I0410 22:14:53.745602   40336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0410 22:14:53.745615   40336 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:14:53.850406   40336 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-824789
	
	I0410 22:14:53.850429   40336 main.go:141] libmachine: (multinode-824789) Calling .GetMachineName
	I0410 22:14:53.850668   40336 buildroot.go:166] provisioning hostname "multinode-824789"
	I0410 22:14:53.850688   40336 main.go:141] libmachine: (multinode-824789) Calling .GetMachineName
	I0410 22:14:53.850865   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:53.853722   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.854112   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:53.854142   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.854292   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:14:53.854480   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.854636   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.854833   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:14:53.855008   40336 main.go:141] libmachine: Using SSH client type: native
	I0410 22:14:53.855228   40336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0410 22:14:53.855248   40336 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-824789 && echo "multinode-824789" | sudo tee /etc/hostname
	I0410 22:14:53.973675   40336 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-824789
	
	I0410 22:14:53.973709   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:53.976949   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.977427   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:53.977469   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.977659   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:14:53.977864   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.978041   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.978214   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:14:53.978423   40336 main.go:141] libmachine: Using SSH client type: native
	I0410 22:14:53.978620   40336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0410 22:14:53.978644   40336 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-824789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-824789/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-824789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:14:54.082096   40336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:14:54.082126   40336 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:14:54.082159   40336 buildroot.go:174] setting up certificates
	I0410 22:14:54.082168   40336 provision.go:84] configureAuth start
	I0410 22:14:54.082176   40336 main.go:141] libmachine: (multinode-824789) Calling .GetMachineName
	I0410 22:14:54.082493   40336 main.go:141] libmachine: (multinode-824789) Calling .GetIP
	I0410 22:14:54.084770   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.085201   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:54.085216   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.085389   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:54.088049   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.088423   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:54.088457   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.088586   40336 provision.go:143] copyHostCerts
	I0410 22:14:54.088628   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:14:54.088666   40336 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:14:54.088686   40336 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:14:54.088770   40336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:14:54.088876   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:14:54.088902   40336 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:14:54.088912   40336 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:14:54.088955   40336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:14:54.089030   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:14:54.089051   40336 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:14:54.089068   40336 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:14:54.089105   40336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:14:54.089183   40336 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.multinode-824789 san=[127.0.0.1 192.168.39.94 localhost minikube multinode-824789]
	I0410 22:14:54.262659   40336 provision.go:177] copyRemoteCerts
	I0410 22:14:54.262718   40336 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:14:54.262740   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:54.265429   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.265740   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:54.265769   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.265990   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:14:54.266159   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:54.266307   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:14:54.266454   40336 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/multinode-824789/id_rsa Username:docker}
	I0410 22:14:54.350009   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0410 22:14:54.350086   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:14:54.377365   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0410 22:14:54.377473   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:14:54.405619   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0410 22:14:54.405697   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0410 22:14:54.432574   40336 provision.go:87] duration metric: took 350.395007ms to configureAuth
	I0410 22:14:54.432604   40336 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:14:54.432827   40336 config.go:182] Loaded profile config "multinode-824789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:14:54.432907   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:54.435629   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.436034   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:54.436061   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.436203   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:14:54.436435   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:54.436610   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:54.436733   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:14:54.436949   40336 main.go:141] libmachine: Using SSH client type: native
	I0410 22:14:54.437150   40336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0410 22:14:54.437187   40336 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:16:25.258287   40336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:16:25.258317   40336 machine.go:97] duration metric: took 1m31.516769992s to provisionDockerMachine
	I0410 22:16:25.258334   40336 start.go:293] postStartSetup for "multinode-824789" (driver="kvm2")
	I0410 22:16:25.258347   40336 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:16:25.258388   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:16:25.258738   40336 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:16:25.258773   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:16:25.261516   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.261868   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:25.261905   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.262090   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:16:25.262304   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:16:25.262480   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:16:25.262717   40336 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/multinode-824789/id_rsa Username:docker}
	I0410 22:16:25.345695   40336 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:16:25.350004   40336 command_runner.go:130] > NAME=Buildroot
	I0410 22:16:25.350024   40336 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0410 22:16:25.350028   40336 command_runner.go:130] > ID=buildroot
	I0410 22:16:25.350040   40336 command_runner.go:130] > VERSION_ID=2023.02.9
	I0410 22:16:25.350045   40336 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0410 22:16:25.350070   40336 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:16:25.350083   40336 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:16:25.350142   40336 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:16:25.350232   40336 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:16:25.350245   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> /etc/ssl/certs/130012.pem
	I0410 22:16:25.350335   40336 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:16:25.360546   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:16:25.386152   40336 start.go:296] duration metric: took 127.805194ms for postStartSetup
	I0410 22:16:25.386193   40336 fix.go:56] duration metric: took 1m31.666243462s for fixHost
	I0410 22:16:25.386211   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:16:25.388883   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.389194   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:25.389227   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.389358   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:16:25.389557   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:16:25.389721   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:16:25.389877   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:16:25.390030   40336 main.go:141] libmachine: Using SSH client type: native
	I0410 22:16:25.390236   40336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0410 22:16:25.390248   40336 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:16:25.493429   40336 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712787385.473870702
	
	I0410 22:16:25.493475   40336 fix.go:216] guest clock: 1712787385.473870702
	I0410 22:16:25.493488   40336 fix.go:229] Guest: 2024-04-10 22:16:25.473870702 +0000 UTC Remote: 2024-04-10 22:16:25.386196463 +0000 UTC m=+91.804764720 (delta=87.674239ms)
	I0410 22:16:25.493551   40336 fix.go:200] guest clock delta is within tolerance: 87.674239ms
	I0410 22:16:25.493559   40336 start.go:83] releasing machines lock for "multinode-824789", held for 1m31.773630625s
	I0410 22:16:25.493592   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:16:25.493892   40336 main.go:141] libmachine: (multinode-824789) Calling .GetIP
	I0410 22:16:25.496612   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.496985   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:25.497023   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.497153   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:16:25.497581   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:16:25.497777   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:16:25.497864   40336 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:16:25.497902   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:16:25.498028   40336 ssh_runner.go:195] Run: cat /version.json
	I0410 22:16:25.498050   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:16:25.500762   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.501134   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:25.501160   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.501180   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.501297   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:16:25.501498   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:16:25.501624   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:25.501636   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:16:25.501645   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.501801   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:16:25.501809   40336 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/multinode-824789/id_rsa Username:docker}
	I0410 22:16:25.501982   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:16:25.502137   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:16:25.502319   40336 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/multinode-824789/id_rsa Username:docker}
	I0410 22:16:25.609848   40336 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0410 22:16:25.610640   40336 command_runner.go:130] > {"iso_version": "v1.33.0-1712743565-18610", "kicbase_version": "v0.0.43-1712593525-18585", "minikube_version": "v1.33.0-beta.0", "commit": "c0a429c696190f9570e438712701fdb5e36a248a"}
	I0410 22:16:25.610778   40336 ssh_runner.go:195] Run: systemctl --version
	I0410 22:16:25.616958   40336 command_runner.go:130] > systemd 252 (252)
	I0410 22:16:25.617006   40336 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0410 22:16:25.617204   40336 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:16:25.784844   40336 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0410 22:16:25.792819   40336 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0410 22:16:25.793319   40336 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:16:25.793384   40336 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:16:25.803494   40336 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0410 22:16:25.803518   40336 start.go:494] detecting cgroup driver to use...
	I0410 22:16:25.803590   40336 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:16:25.821232   40336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:16:25.836293   40336 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:16:25.836369   40336 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:16:25.850632   40336 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:16:25.865087   40336 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:16:26.014680   40336 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:16:26.163698   40336 docker.go:233] disabling docker service ...
	I0410 22:16:26.163757   40336 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:16:26.180916   40336 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:16:26.195108   40336 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:16:26.346942   40336 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:16:26.498431   40336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:16:26.514013   40336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:16:26.534278   40336 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0410 22:16:26.534326   40336 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:16:26.534377   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.545430   40336 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:16:26.545501   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.555868   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.566665   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.577249   40336 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:16:26.588157   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.598959   40336 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.611855   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.624189   40336 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:16:26.634648   40336 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0410 22:16:26.634708   40336 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:16:26.645184   40336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:16:26.790860   40336 ssh_runner.go:195] Run: sudo systemctl restart crio
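
	With containerd, cri-docker and docker stopped and masked above, the CRI-O side of the runtime setup condenses to the following shell sequence. Every command and value here is copied from the log lines above; nothing new is being configured, this is just the same steps gathered in one place:

	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload
	sudo systemctl restart crio
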
	I0410 22:16:27.040333   40336 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:16:27.040423   40336 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:16:27.045863   40336 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0410 22:16:27.045884   40336 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0410 22:16:27.045917   40336 command_runner.go:130] > Device: 0,22	Inode: 1323        Links: 1
	I0410 22:16:27.045930   40336 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0410 22:16:27.045935   40336 command_runner.go:130] > Access: 2024-04-10 22:16:26.913249186 +0000
	I0410 22:16:27.045943   40336 command_runner.go:130] > Modify: 2024-04-10 22:16:26.913249186 +0000
	I0410 22:16:27.045950   40336 command_runner.go:130] > Change: 2024-04-10 22:16:26.913249186 +0000
	I0410 22:16:27.045957   40336 command_runner.go:130] >  Birth: -
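
	After the restart, start.go waits up to 60s for the CRI-O socket to appear before it probes crictl. A minimal sketch of that wait, assuming the same socket path as above (a restatement for readability, not minikube's actual start.go loop):

	# Poll /var/run/crio/crio.sock for up to 60s, as logged above.
	for i in $(seq 1 60); do
	  stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
	  sleep 1
	done
	sudo /usr/bin/crictl version   # succeeds once the socket is up (cri-o 1.29.1 in this run)
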
	I0410 22:16:27.046032   40336 start.go:562] Will wait 60s for crictl version
	I0410 22:16:27.046111   40336 ssh_runner.go:195] Run: which crictl
	I0410 22:16:27.049899   40336 command_runner.go:130] > /usr/bin/crictl
	I0410 22:16:27.050043   40336 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:16:27.090666   40336 command_runner.go:130] > Version:  0.1.0
	I0410 22:16:27.090692   40336 command_runner.go:130] > RuntimeName:  cri-o
	I0410 22:16:27.090696   40336 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0410 22:16:27.090703   40336 command_runner.go:130] > RuntimeApiVersion:  v1
	I0410 22:16:27.091682   40336 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:16:27.091743   40336 ssh_runner.go:195] Run: crio --version
	I0410 22:16:27.127223   40336 command_runner.go:130] > crio version 1.29.1
	I0410 22:16:27.127251   40336 command_runner.go:130] > Version:        1.29.1
	I0410 22:16:27.127259   40336 command_runner.go:130] > GitCommit:      unknown
	I0410 22:16:27.127264   40336 command_runner.go:130] > GitCommitDate:  unknown
	I0410 22:16:27.127269   40336 command_runner.go:130] > GitTreeState:   clean
	I0410 22:16:27.127276   40336 command_runner.go:130] > BuildDate:      2024-04-10T15:40:24Z
	I0410 22:16:27.127282   40336 command_runner.go:130] > GoVersion:      go1.21.6
	I0410 22:16:27.127288   40336 command_runner.go:130] > Compiler:       gc
	I0410 22:16:27.127295   40336 command_runner.go:130] > Platform:       linux/amd64
	I0410 22:16:27.127301   40336 command_runner.go:130] > Linkmode:       dynamic
	I0410 22:16:27.127308   40336 command_runner.go:130] > BuildTags:      
	I0410 22:16:27.127314   40336 command_runner.go:130] >   containers_image_ostree_stub
	I0410 22:16:27.127320   40336 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0410 22:16:27.127338   40336 command_runner.go:130] >   btrfs_noversion
	I0410 22:16:27.127349   40336 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0410 22:16:27.127357   40336 command_runner.go:130] >   libdm_no_deferred_remove
	I0410 22:16:27.127362   40336 command_runner.go:130] >   seccomp
	I0410 22:16:27.127372   40336 command_runner.go:130] > LDFlags:          unknown
	I0410 22:16:27.127379   40336 command_runner.go:130] > SeccompEnabled:   true
	I0410 22:16:27.127386   40336 command_runner.go:130] > AppArmorEnabled:  false
	I0410 22:16:27.127477   40336 ssh_runner.go:195] Run: crio --version
	I0410 22:16:27.155799   40336 command_runner.go:130] > crio version 1.29.1
	I0410 22:16:27.155821   40336 command_runner.go:130] > Version:        1.29.1
	I0410 22:16:27.155826   40336 command_runner.go:130] > GitCommit:      unknown
	I0410 22:16:27.155830   40336 command_runner.go:130] > GitCommitDate:  unknown
	I0410 22:16:27.155851   40336 command_runner.go:130] > GitTreeState:   clean
	I0410 22:16:27.155857   40336 command_runner.go:130] > BuildDate:      2024-04-10T15:40:24Z
	I0410 22:16:27.155861   40336 command_runner.go:130] > GoVersion:      go1.21.6
	I0410 22:16:27.155865   40336 command_runner.go:130] > Compiler:       gc
	I0410 22:16:27.155869   40336 command_runner.go:130] > Platform:       linux/amd64
	I0410 22:16:27.155873   40336 command_runner.go:130] > Linkmode:       dynamic
	I0410 22:16:27.155878   40336 command_runner.go:130] > BuildTags:      
	I0410 22:16:27.155882   40336 command_runner.go:130] >   containers_image_ostree_stub
	I0410 22:16:27.155886   40336 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0410 22:16:27.155890   40336 command_runner.go:130] >   btrfs_noversion
	I0410 22:16:27.155894   40336 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0410 22:16:27.155898   40336 command_runner.go:130] >   libdm_no_deferred_remove
	I0410 22:16:27.155905   40336 command_runner.go:130] >   seccomp
	I0410 22:16:27.155909   40336 command_runner.go:130] > LDFlags:          unknown
	I0410 22:16:27.155914   40336 command_runner.go:130] > SeccompEnabled:   true
	I0410 22:16:27.155918   40336 command_runner.go:130] > AppArmorEnabled:  false
	I0410 22:16:27.160443   40336 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:16:27.162094   40336 main.go:141] libmachine: (multinode-824789) Calling .GetIP
	I0410 22:16:27.164700   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:27.165017   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:27.165049   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:27.165241   40336 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 22:16:27.169579   40336 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0410 22:16:27.169759   40336 kubeadm.go:877] updating cluster {Name:multinode-824789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-824789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.224 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:16:27.169890   40336 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:16:27.169930   40336 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:16:27.214772   40336 command_runner.go:130] > {
	I0410 22:16:27.214799   40336 command_runner.go:130] >   "images": [
	I0410 22:16:27.214804   40336 command_runner.go:130] >     {
	I0410 22:16:27.214816   40336 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0410 22:16:27.214822   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.214833   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0410 22:16:27.214838   40336 command_runner.go:130] >       ],
	I0410 22:16:27.214843   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.214856   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0410 22:16:27.214874   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0410 22:16:27.214880   40336 command_runner.go:130] >       ],
	I0410 22:16:27.214886   40336 command_runner.go:130] >       "size": "65291810",
	I0410 22:16:27.214892   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.214899   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.214916   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.214923   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.214927   40336 command_runner.go:130] >     },
	I0410 22:16:27.214930   40336 command_runner.go:130] >     {
	I0410 22:16:27.214936   40336 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0410 22:16:27.214951   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.214962   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0410 22:16:27.214969   40336 command_runner.go:130] >       ],
	I0410 22:16:27.214973   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.214980   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0410 22:16:27.214989   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0410 22:16:27.214992   40336 command_runner.go:130] >       ],
	I0410 22:16:27.214997   40336 command_runner.go:130] >       "size": "1363676",
	I0410 22:16:27.215003   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.215010   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215016   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215020   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215024   40336 command_runner.go:130] >     },
	I0410 22:16:27.215028   40336 command_runner.go:130] >     {
	I0410 22:16:27.215033   40336 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0410 22:16:27.215038   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215043   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0410 22:16:27.215049   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215053   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215063   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0410 22:16:27.215073   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0410 22:16:27.215078   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215082   40336 command_runner.go:130] >       "size": "31470524",
	I0410 22:16:27.215087   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.215093   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215097   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215101   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215104   40336 command_runner.go:130] >     },
	I0410 22:16:27.215107   40336 command_runner.go:130] >     {
	I0410 22:16:27.215113   40336 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0410 22:16:27.215119   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215123   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0410 22:16:27.215129   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215133   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215140   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0410 22:16:27.215176   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0410 22:16:27.215190   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215194   40336 command_runner.go:130] >       "size": "61245718",
	I0410 22:16:27.215198   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.215204   40336 command_runner.go:130] >       "username": "nonroot",
	I0410 22:16:27.215208   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215216   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215220   40336 command_runner.go:130] >     },
	I0410 22:16:27.215223   40336 command_runner.go:130] >     {
	I0410 22:16:27.215230   40336 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0410 22:16:27.215236   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215241   40336 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0410 22:16:27.215247   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215252   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215259   40336 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0410 22:16:27.215268   40336 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0410 22:16:27.215271   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215276   40336 command_runner.go:130] >       "size": "150779692",
	I0410 22:16:27.215282   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.215286   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.215289   40336 command_runner.go:130] >       },
	I0410 22:16:27.215293   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215297   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215301   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215307   40336 command_runner.go:130] >     },
	I0410 22:16:27.215310   40336 command_runner.go:130] >     {
	I0410 22:16:27.215316   40336 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0410 22:16:27.215320   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215326   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0410 22:16:27.215333   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215337   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215344   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0410 22:16:27.215354   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0410 22:16:27.215357   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215361   40336 command_runner.go:130] >       "size": "128508878",
	I0410 22:16:27.215367   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.215371   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.215379   40336 command_runner.go:130] >       },
	I0410 22:16:27.215385   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215389   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215395   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215398   40336 command_runner.go:130] >     },
	I0410 22:16:27.215401   40336 command_runner.go:130] >     {
	I0410 22:16:27.215407   40336 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0410 22:16:27.215412   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215417   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0410 22:16:27.215420   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215424   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215432   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0410 22:16:27.215442   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0410 22:16:27.215445   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215449   40336 command_runner.go:130] >       "size": "123142962",
	I0410 22:16:27.215452   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.215458   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.215463   40336 command_runner.go:130] >       },
	I0410 22:16:27.215467   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215473   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215477   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215480   40336 command_runner.go:130] >     },
	I0410 22:16:27.215483   40336 command_runner.go:130] >     {
	I0410 22:16:27.215489   40336 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0410 22:16:27.215495   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215500   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0410 22:16:27.215506   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215510   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215530   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0410 22:16:27.215540   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0410 22:16:27.215543   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215547   40336 command_runner.go:130] >       "size": "83634073",
	I0410 22:16:27.215553   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.215556   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215560   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215564   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215571   40336 command_runner.go:130] >     },
	I0410 22:16:27.215574   40336 command_runner.go:130] >     {
	I0410 22:16:27.215580   40336 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0410 22:16:27.215584   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215588   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0410 22:16:27.215591   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215595   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215602   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0410 22:16:27.215609   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0410 22:16:27.215616   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215620   40336 command_runner.go:130] >       "size": "60724018",
	I0410 22:16:27.215623   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.215627   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.215630   40336 command_runner.go:130] >       },
	I0410 22:16:27.215635   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215639   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215643   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215646   40336 command_runner.go:130] >     },
	I0410 22:16:27.215650   40336 command_runner.go:130] >     {
	I0410 22:16:27.215656   40336 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0410 22:16:27.215662   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215666   40336 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0410 22:16:27.215670   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215674   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215681   40336 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0410 22:16:27.215690   40336 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0410 22:16:27.215695   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215699   40336 command_runner.go:130] >       "size": "750414",
	I0410 22:16:27.215705   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.215709   40336 command_runner.go:130] >         "value": "65535"
	I0410 22:16:27.215712   40336 command_runner.go:130] >       },
	I0410 22:16:27.215718   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215722   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215728   40336 command_runner.go:130] >       "pinned": true
	I0410 22:16:27.215731   40336 command_runner.go:130] >     }
	I0410 22:16:27.215734   40336 command_runner.go:130] >   ]
	I0410 22:16:27.215742   40336 command_runner.go:130] > }
	I0410 22:16:27.215895   40336 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:16:27.215905   40336 crio.go:433] Images already preloaded, skipping extraction
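
	The crictl images --output json dump above lists ten preloaded images, which is why crio.go concludes that extraction can be skipped. To eyeball the same list by tag instead of reading the raw JSON, something along these lines works (jq being present on the guest is an assumption; the expected tags are taken from the JSON above):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'
	# Expected output, per the dump above:
	#   docker.io/kindest/kindnetd:v20240202-8f1494ea
	#   gcr.io/k8s-minikube/busybox:1.28
	#   gcr.io/k8s-minikube/storage-provisioner:v5
	#   registry.k8s.io/coredns/coredns:v1.11.1
	#   registry.k8s.io/etcd:3.5.12-0
	#   registry.k8s.io/kube-apiserver:v1.29.3
	#   registry.k8s.io/kube-controller-manager:v1.29.3
	#   registry.k8s.io/kube-proxy:v1.29.3
	#   registry.k8s.io/kube-scheduler:v1.29.3
	#   registry.k8s.io/pause:3.9
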
	I0410 22:16:27.215947   40336 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:16:27.255788   40336 command_runner.go:130] > {
	I0410 22:16:27.255807   40336 command_runner.go:130] >   "images": [
	I0410 22:16:27.255811   40336 command_runner.go:130] >     {
	I0410 22:16:27.255819   40336 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0410 22:16:27.255824   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.255833   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0410 22:16:27.255838   40336 command_runner.go:130] >       ],
	I0410 22:16:27.255845   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.255858   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0410 22:16:27.255868   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0410 22:16:27.255873   40336 command_runner.go:130] >       ],
	I0410 22:16:27.255880   40336 command_runner.go:130] >       "size": "65291810",
	I0410 22:16:27.255890   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.255895   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.255923   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.255936   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.255941   40336 command_runner.go:130] >     },
	I0410 22:16:27.255947   40336 command_runner.go:130] >     {
	I0410 22:16:27.255960   40336 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0410 22:16:27.255965   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.255971   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0410 22:16:27.255975   40336 command_runner.go:130] >       ],
	I0410 22:16:27.255979   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.255986   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0410 22:16:27.255993   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0410 22:16:27.255996   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256000   40336 command_runner.go:130] >       "size": "1363676",
	I0410 22:16:27.256004   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.256011   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256014   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256018   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256023   40336 command_runner.go:130] >     },
	I0410 22:16:27.256027   40336 command_runner.go:130] >     {
	I0410 22:16:27.256033   40336 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0410 22:16:27.256038   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256043   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0410 22:16:27.256047   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256051   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256059   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0410 22:16:27.256067   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0410 22:16:27.256087   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256099   40336 command_runner.go:130] >       "size": "31470524",
	I0410 22:16:27.256104   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.256108   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256111   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256115   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256121   40336 command_runner.go:130] >     },
	I0410 22:16:27.256125   40336 command_runner.go:130] >     {
	I0410 22:16:27.256130   40336 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0410 22:16:27.256134   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256139   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0410 22:16:27.256145   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256149   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256155   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0410 22:16:27.256167   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0410 22:16:27.256171   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256175   40336 command_runner.go:130] >       "size": "61245718",
	I0410 22:16:27.256179   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.256183   40336 command_runner.go:130] >       "username": "nonroot",
	I0410 22:16:27.256191   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256195   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256198   40336 command_runner.go:130] >     },
	I0410 22:16:27.256201   40336 command_runner.go:130] >     {
	I0410 22:16:27.256207   40336 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0410 22:16:27.256212   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256216   40336 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0410 22:16:27.256219   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256223   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256234   40336 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0410 22:16:27.256240   40336 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0410 22:16:27.256246   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256250   40336 command_runner.go:130] >       "size": "150779692",
	I0410 22:16:27.256254   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.256257   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.256261   40336 command_runner.go:130] >       },
	I0410 22:16:27.256265   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256270   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256274   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256277   40336 command_runner.go:130] >     },
	I0410 22:16:27.256280   40336 command_runner.go:130] >     {
	I0410 22:16:27.256288   40336 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0410 22:16:27.256292   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256297   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0410 22:16:27.256300   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256304   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256311   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0410 22:16:27.256318   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0410 22:16:27.256324   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256328   40336 command_runner.go:130] >       "size": "128508878",
	I0410 22:16:27.256331   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.256337   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.256340   40336 command_runner.go:130] >       },
	I0410 22:16:27.256344   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256348   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256352   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256355   40336 command_runner.go:130] >     },
	I0410 22:16:27.256358   40336 command_runner.go:130] >     {
	I0410 22:16:27.256364   40336 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0410 22:16:27.256368   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256373   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0410 22:16:27.256379   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256383   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256393   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0410 22:16:27.256414   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0410 22:16:27.256423   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256427   40336 command_runner.go:130] >       "size": "123142962",
	I0410 22:16:27.256431   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.256436   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.256441   40336 command_runner.go:130] >       },
	I0410 22:16:27.256445   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256449   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256460   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256466   40336 command_runner.go:130] >     },
	I0410 22:16:27.256469   40336 command_runner.go:130] >     {
	I0410 22:16:27.256475   40336 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0410 22:16:27.256481   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256486   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0410 22:16:27.256492   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256496   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256510   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0410 22:16:27.256520   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0410 22:16:27.256523   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256527   40336 command_runner.go:130] >       "size": "83634073",
	I0410 22:16:27.256531   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.256535   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256539   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256543   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256546   40336 command_runner.go:130] >     },
	I0410 22:16:27.256549   40336 command_runner.go:130] >     {
	I0410 22:16:27.256555   40336 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0410 22:16:27.256561   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256567   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0410 22:16:27.256572   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256577   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256584   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0410 22:16:27.256593   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0410 22:16:27.256597   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256601   40336 command_runner.go:130] >       "size": "60724018",
	I0410 22:16:27.256607   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.256610   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.256618   40336 command_runner.go:130] >       },
	I0410 22:16:27.256624   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256628   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256634   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256637   40336 command_runner.go:130] >     },
	I0410 22:16:27.256640   40336 command_runner.go:130] >     {
	I0410 22:16:27.256646   40336 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0410 22:16:27.256652   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256657   40336 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0410 22:16:27.256661   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256665   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256674   40336 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0410 22:16:27.256683   40336 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0410 22:16:27.256689   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256693   40336 command_runner.go:130] >       "size": "750414",
	I0410 22:16:27.256696   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.256700   40336 command_runner.go:130] >         "value": "65535"
	I0410 22:16:27.256704   40336 command_runner.go:130] >       },
	I0410 22:16:27.256708   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256711   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256715   40336 command_runner.go:130] >       "pinned": true
	I0410 22:16:27.256718   40336 command_runner.go:130] >     }
	I0410 22:16:27.256721   40336 command_runner.go:130] >   ]
	I0410 22:16:27.256726   40336 command_runner.go:130] > }
	I0410 22:16:27.256824   40336 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:16:27.256834   40336 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:16:27.256843   40336 kubeadm.go:928] updating node { 192.168.39.94 8443 v1.29.3 crio true true} ...
	I0410 22:16:27.256938   40336 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-824789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-824789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
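
	The kubelet stanza printed above is the systemd override minikube generates for this node. As a hedged sketch, it corresponds to writing a drop-in like the following; the drop-in path 10-kubeadm.conf is an assumption on my part, while the [Unit]/[Service] directives are copied from the log:

	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-824789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94

	[Install]
	EOF
	sudo systemctl daemon-reload
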
	I0410 22:16:27.256997   40336 ssh_runner.go:195] Run: crio config
	I0410 22:16:27.309416   40336 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0410 22:16:27.309444   40336 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0410 22:16:27.309454   40336 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0410 22:16:27.309459   40336 command_runner.go:130] > #
	I0410 22:16:27.309469   40336 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0410 22:16:27.309478   40336 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0410 22:16:27.309489   40336 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0410 22:16:27.309499   40336 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0410 22:16:27.309512   40336 command_runner.go:130] > # reload'.
	I0410 22:16:27.309521   40336 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0410 22:16:27.309538   40336 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0410 22:16:27.309549   40336 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0410 22:16:27.309563   40336 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0410 22:16:27.309571   40336 command_runner.go:130] > [crio]
	I0410 22:16:27.309580   40336 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0410 22:16:27.309587   40336 command_runner.go:130] > # containers images, in this directory.
	I0410 22:16:27.309595   40336 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0410 22:16:27.309611   40336 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0410 22:16:27.309659   40336 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0410 22:16:27.309680   40336 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0410 22:16:27.309892   40336 command_runner.go:130] > # imagestore = ""
	I0410 22:16:27.309909   40336 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0410 22:16:27.309919   40336 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0410 22:16:27.310020   40336 command_runner.go:130] > storage_driver = "overlay"
	I0410 22:16:27.310037   40336 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0410 22:16:27.310047   40336 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0410 22:16:27.310056   40336 command_runner.go:130] > storage_option = [
	I0410 22:16:27.310194   40336 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0410 22:16:27.310320   40336 command_runner.go:130] > ]
	I0410 22:16:27.310340   40336 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0410 22:16:27.310351   40336 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0410 22:16:27.310604   40336 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0410 22:16:27.310616   40336 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0410 22:16:27.310622   40336 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0410 22:16:27.310627   40336 command_runner.go:130] > # always happen on a node reboot
	I0410 22:16:27.310904   40336 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0410 22:16:27.310932   40336 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0410 22:16:27.310944   40336 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0410 22:16:27.310953   40336 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0410 22:16:27.311044   40336 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0410 22:16:27.311057   40336 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0410 22:16:27.311065   40336 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0410 22:16:27.311384   40336 command_runner.go:130] > # internal_wipe = true
	I0410 22:16:27.311396   40336 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0410 22:16:27.311401   40336 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0410 22:16:27.311687   40336 command_runner.go:130] > # internal_repair = false
	I0410 22:16:27.311704   40336 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0410 22:16:27.311714   40336 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0410 22:16:27.311725   40336 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0410 22:16:27.311980   40336 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0410 22:16:27.311991   40336 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0410 22:16:27.312000   40336 command_runner.go:130] > [crio.api]
	I0410 22:16:27.312006   40336 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0410 22:16:27.312631   40336 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0410 22:16:27.312646   40336 command_runner.go:130] > # IP address on which the stream server will listen.
	I0410 22:16:27.312651   40336 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0410 22:16:27.312657   40336 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0410 22:16:27.312662   40336 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0410 22:16:27.312666   40336 command_runner.go:130] > # stream_port = "0"
	I0410 22:16:27.312672   40336 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0410 22:16:27.312676   40336 command_runner.go:130] > # stream_enable_tls = false
	I0410 22:16:27.312684   40336 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0410 22:16:27.312688   40336 command_runner.go:130] > # stream_idle_timeout = ""
	I0410 22:16:27.312700   40336 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0410 22:16:27.312707   40336 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0410 22:16:27.312710   40336 command_runner.go:130] > # minutes.
	I0410 22:16:27.312715   40336 command_runner.go:130] > # stream_tls_cert = ""
	I0410 22:16:27.312720   40336 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0410 22:16:27.312726   40336 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0410 22:16:27.312733   40336 command_runner.go:130] > # stream_tls_key = ""
	I0410 22:16:27.312738   40336 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0410 22:16:27.312744   40336 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0410 22:16:27.312761   40336 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0410 22:16:27.312772   40336 command_runner.go:130] > # stream_tls_ca = ""
	I0410 22:16:27.312783   40336 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0410 22:16:27.312791   40336 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0410 22:16:27.312803   40336 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0410 22:16:27.312811   40336 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0410 22:16:27.312817   40336 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0410 22:16:27.312823   40336 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0410 22:16:27.312830   40336 command_runner.go:130] > [crio.runtime]
	I0410 22:16:27.312840   40336 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0410 22:16:27.312852   40336 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0410 22:16:27.312861   40336 command_runner.go:130] > # "nofile=1024:2048"
	I0410 22:16:27.312871   40336 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0410 22:16:27.312878   40336 command_runner.go:130] > # default_ulimits = [
	I0410 22:16:27.312881   40336 command_runner.go:130] > # ]
	I0410 22:16:27.312887   40336 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0410 22:16:27.312895   40336 command_runner.go:130] > # no_pivot = false
	I0410 22:16:27.312904   40336 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0410 22:16:27.312918   40336 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0410 22:16:27.312930   40336 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0410 22:16:27.312943   40336 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0410 22:16:27.312951   40336 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0410 22:16:27.312959   40336 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0410 22:16:27.312967   40336 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0410 22:16:27.312971   40336 command_runner.go:130] > # Cgroup setting for conmon
	I0410 22:16:27.312980   40336 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0410 22:16:27.312987   40336 command_runner.go:130] > conmon_cgroup = "pod"
	I0410 22:16:27.313000   40336 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0410 22:16:27.313014   40336 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0410 22:16:27.313029   40336 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0410 22:16:27.313038   40336 command_runner.go:130] > conmon_env = [
	I0410 22:16:27.313047   40336 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0410 22:16:27.313053   40336 command_runner.go:130] > ]
	I0410 22:16:27.313058   40336 command_runner.go:130] > # Additional environment variables to set for all the
	I0410 22:16:27.313063   40336 command_runner.go:130] > # containers. These are overridden if set in the
	I0410 22:16:27.313075   40336 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0410 22:16:27.313081   40336 command_runner.go:130] > # default_env = [
	I0410 22:16:27.313090   40336 command_runner.go:130] > # ]
	I0410 22:16:27.313099   40336 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0410 22:16:27.313116   40336 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0410 22:16:27.313125   40336 command_runner.go:130] > # selinux = false
	I0410 22:16:27.313136   40336 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0410 22:16:27.313149   40336 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0410 22:16:27.313161   40336 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0410 22:16:27.313169   40336 command_runner.go:130] > # seccomp_profile = ""
	I0410 22:16:27.313176   40336 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0410 22:16:27.313188   40336 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0410 22:16:27.313201   40336 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0410 22:16:27.313210   40336 command_runner.go:130] > # which might increase security.
	I0410 22:16:27.313221   40336 command_runner.go:130] > # This option is currently deprecated,
	I0410 22:16:27.313230   40336 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0410 22:16:27.313241   40336 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0410 22:16:27.313255   40336 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0410 22:16:27.313266   40336 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0410 22:16:27.313276   40336 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0410 22:16:27.313289   40336 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0410 22:16:27.313302   40336 command_runner.go:130] > # This option supports live configuration reload.
	I0410 22:16:27.313313   40336 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0410 22:16:27.313326   40336 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0410 22:16:27.313332   40336 command_runner.go:130] > # the cgroup blockio controller.
	I0410 22:16:27.313342   40336 command_runner.go:130] > # blockio_config_file = ""
	I0410 22:16:27.313351   40336 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0410 22:16:27.313359   40336 command_runner.go:130] > # blockio parameters.
	I0410 22:16:27.313366   40336 command_runner.go:130] > # blockio_reload = false
	I0410 22:16:27.313379   40336 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0410 22:16:27.313389   40336 command_runner.go:130] > # irqbalance daemon.
	I0410 22:16:27.313400   40336 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0410 22:16:27.313413   40336 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0410 22:16:27.313427   40336 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0410 22:16:27.313441   40336 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0410 22:16:27.313458   40336 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0410 22:16:27.313469   40336 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0410 22:16:27.313479   40336 command_runner.go:130] > # This option supports live configuration reload.
	I0410 22:16:27.313489   40336 command_runner.go:130] > # rdt_config_file = ""
	I0410 22:16:27.313501   40336 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0410 22:16:27.313513   40336 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0410 22:16:27.313560   40336 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0410 22:16:27.313572   40336 command_runner.go:130] > # separate_pull_cgroup = ""
	I0410 22:16:27.313583   40336 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0410 22:16:27.313593   40336 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0410 22:16:27.313603   40336 command_runner.go:130] > # will be added.
	I0410 22:16:27.313610   40336 command_runner.go:130] > # default_capabilities = [
	I0410 22:16:27.313617   40336 command_runner.go:130] > # 	"CHOWN",
	I0410 22:16:27.313623   40336 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0410 22:16:27.313630   40336 command_runner.go:130] > # 	"FSETID",
	I0410 22:16:27.313638   40336 command_runner.go:130] > # 	"FOWNER",
	I0410 22:16:27.313643   40336 command_runner.go:130] > # 	"SETGID",
	I0410 22:16:27.313652   40336 command_runner.go:130] > # 	"SETUID",
	I0410 22:16:27.313658   40336 command_runner.go:130] > # 	"SETPCAP",
	I0410 22:16:27.313668   40336 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0410 22:16:27.313678   40336 command_runner.go:130] > # 	"KILL",
	I0410 22:16:27.313682   40336 command_runner.go:130] > # ]
	I0410 22:16:27.313696   40336 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0410 22:16:27.313711   40336 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0410 22:16:27.313720   40336 command_runner.go:130] > # add_inheritable_capabilities = false
	I0410 22:16:27.313729   40336 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0410 22:16:27.313741   40336 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0410 22:16:27.313748   40336 command_runner.go:130] > default_sysctls = [
	I0410 22:16:27.313763   40336 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0410 22:16:27.313769   40336 command_runner.go:130] > ]
	I0410 22:16:27.313777   40336 command_runner.go:130] > # List of devices on the host that a
	I0410 22:16:27.313791   40336 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0410 22:16:27.313800   40336 command_runner.go:130] > # allowed_devices = [
	I0410 22:16:27.313807   40336 command_runner.go:130] > # 	"/dev/fuse",
	I0410 22:16:27.313816   40336 command_runner.go:130] > # ]
	I0410 22:16:27.313824   40336 command_runner.go:130] > # List of additional devices, specified as
	I0410 22:16:27.313836   40336 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0410 22:16:27.313848   40336 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0410 22:16:27.313858   40336 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0410 22:16:27.313868   40336 command_runner.go:130] > # additional_devices = [
	I0410 22:16:27.313873   40336 command_runner.go:130] > # ]
	I0410 22:16:27.313884   40336 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0410 22:16:27.313894   40336 command_runner.go:130] > # cdi_spec_dirs = [
	I0410 22:16:27.313904   40336 command_runner.go:130] > # 	"/etc/cdi",
	I0410 22:16:27.313915   40336 command_runner.go:130] > # 	"/var/run/cdi",
	I0410 22:16:27.313921   40336 command_runner.go:130] > # ]
	I0410 22:16:27.313934   40336 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0410 22:16:27.313947   40336 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0410 22:16:27.313957   40336 command_runner.go:130] > # Defaults to false.
	I0410 22:16:27.313969   40336 command_runner.go:130] > # device_ownership_from_security_context = false
	I0410 22:16:27.313981   40336 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0410 22:16:27.313995   40336 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0410 22:16:27.314005   40336 command_runner.go:130] > # hooks_dir = [
	I0410 22:16:27.314012   40336 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0410 22:16:27.314022   40336 command_runner.go:130] > # ]
	I0410 22:16:27.314032   40336 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0410 22:16:27.314045   40336 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0410 22:16:27.314057   40336 command_runner.go:130] > # its default mounts from the following two files:
	I0410 22:16:27.314062   40336 command_runner.go:130] > #
	I0410 22:16:27.314074   40336 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0410 22:16:27.314085   40336 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0410 22:16:27.314098   40336 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0410 22:16:27.314105   40336 command_runner.go:130] > #
	I0410 22:16:27.314116   40336 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0410 22:16:27.314130   40336 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0410 22:16:27.314143   40336 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0410 22:16:27.314155   40336 command_runner.go:130] > #      only add mounts it finds in this file.
	I0410 22:16:27.314171   40336 command_runner.go:130] > #
	I0410 22:16:27.314179   40336 command_runner.go:130] > # default_mounts_file = ""
	I0410 22:16:27.314191   40336 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0410 22:16:27.314205   40336 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0410 22:16:27.314215   40336 command_runner.go:130] > pids_limit = 1024
	I0410 22:16:27.314222   40336 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0410 22:16:27.314228   40336 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0410 22:16:27.314233   40336 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0410 22:16:27.314241   40336 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0410 22:16:27.314245   40336 command_runner.go:130] > # log_size_max = -1
	I0410 22:16:27.314256   40336 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0410 22:16:27.314264   40336 command_runner.go:130] > # log_to_journald = false
	I0410 22:16:27.314270   40336 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0410 22:16:27.314277   40336 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0410 22:16:27.314282   40336 command_runner.go:130] > # Path to directory for container attach sockets.
	I0410 22:16:27.314289   40336 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0410 22:16:27.314296   40336 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0410 22:16:27.314300   40336 command_runner.go:130] > # bind_mount_prefix = ""
	I0410 22:16:27.314309   40336 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0410 22:16:27.314314   40336 command_runner.go:130] > # read_only = false
	I0410 22:16:27.314320   40336 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0410 22:16:27.314329   40336 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0410 22:16:27.314333   40336 command_runner.go:130] > # live configuration reload.
	I0410 22:16:27.314337   40336 command_runner.go:130] > # log_level = "info"
	I0410 22:16:27.314344   40336 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0410 22:16:27.314349   40336 command_runner.go:130] > # This option supports live configuration reload.
	I0410 22:16:27.314355   40336 command_runner.go:130] > # log_filter = ""
	I0410 22:16:27.314361   40336 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0410 22:16:27.314369   40336 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0410 22:16:27.314373   40336 command_runner.go:130] > # separated by comma.
	I0410 22:16:27.314383   40336 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0410 22:16:27.314389   40336 command_runner.go:130] > # uid_mappings = ""
	I0410 22:16:27.314394   40336 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0410 22:16:27.314403   40336 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0410 22:16:27.314409   40336 command_runner.go:130] > # separated by comma.
	I0410 22:16:27.314416   40336 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0410 22:16:27.314424   40336 command_runner.go:130] > # gid_mappings = ""
	I0410 22:16:27.314436   40336 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0410 22:16:27.314449   40336 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0410 22:16:27.314465   40336 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0410 22:16:27.314475   40336 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0410 22:16:27.314481   40336 command_runner.go:130] > # minimum_mappable_uid = -1
	I0410 22:16:27.314491   40336 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0410 22:16:27.314504   40336 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0410 22:16:27.314518   40336 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0410 22:16:27.314531   40336 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0410 22:16:27.314543   40336 command_runner.go:130] > # minimum_mappable_gid = -1
	I0410 22:16:27.314550   40336 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0410 22:16:27.314556   40336 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0410 22:16:27.314566   40336 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0410 22:16:27.314576   40336 command_runner.go:130] > # ctr_stop_timeout = 30
	I0410 22:16:27.314588   40336 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0410 22:16:27.314601   40336 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0410 22:16:27.314613   40336 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0410 22:16:27.314620   40336 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0410 22:16:27.314630   40336 command_runner.go:130] > drop_infra_ctr = false
	I0410 22:16:27.314643   40336 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0410 22:16:27.314655   40336 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0410 22:16:27.314667   40336 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0410 22:16:27.314676   40336 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0410 22:16:27.314687   40336 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0410 22:16:27.314700   40336 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0410 22:16:27.314711   40336 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0410 22:16:27.314723   40336 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0410 22:16:27.314732   40336 command_runner.go:130] > # shared_cpuset = ""
	I0410 22:16:27.314738   40336 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0410 22:16:27.314743   40336 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0410 22:16:27.314747   40336 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0410 22:16:27.314754   40336 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0410 22:16:27.314758   40336 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0410 22:16:27.314763   40336 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0410 22:16:27.314769   40336 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0410 22:16:27.314775   40336 command_runner.go:130] > # enable_criu_support = false
	I0410 22:16:27.314781   40336 command_runner.go:130] > # Enable/disable the generation of the container,
	I0410 22:16:27.314791   40336 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0410 22:16:27.314798   40336 command_runner.go:130] > # enable_pod_events = false
	I0410 22:16:27.314804   40336 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0410 22:16:27.314819   40336 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0410 22:16:27.314823   40336 command_runner.go:130] > # default_runtime = "runc"
	I0410 22:16:27.314830   40336 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0410 22:16:27.314844   40336 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0410 22:16:27.314876   40336 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0410 22:16:27.314893   40336 command_runner.go:130] > # creation as a file is not desired either.
	I0410 22:16:27.314908   40336 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0410 22:16:27.314920   40336 command_runner.go:130] > # the hostname is being managed dynamically.
	I0410 22:16:27.314929   40336 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0410 22:16:27.314937   40336 command_runner.go:130] > # ]
	I0410 22:16:27.314947   40336 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0410 22:16:27.314960   40336 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0410 22:16:27.314969   40336 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0410 22:16:27.314974   40336 command_runner.go:130] > # Each entry in the table should follow the format:
	I0410 22:16:27.314979   40336 command_runner.go:130] > #
	I0410 22:16:27.314984   40336 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0410 22:16:27.314991   40336 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0410 22:16:27.315031   40336 command_runner.go:130] > # runtime_type = "oci"
	I0410 22:16:27.315038   40336 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0410 22:16:27.315046   40336 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0410 22:16:27.315056   40336 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0410 22:16:27.315065   40336 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0410 22:16:27.315074   40336 command_runner.go:130] > # monitor_env = []
	I0410 22:16:27.315083   40336 command_runner.go:130] > # privileged_without_host_devices = false
	I0410 22:16:27.315092   40336 command_runner.go:130] > # allowed_annotations = []
	I0410 22:16:27.315100   40336 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0410 22:16:27.315110   40336 command_runner.go:130] > # Where:
	I0410 22:16:27.315119   40336 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0410 22:16:27.315133   40336 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0410 22:16:27.315146   40336 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0410 22:16:27.315158   40336 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0410 22:16:27.315168   40336 command_runner.go:130] > #   in $PATH.
	I0410 22:16:27.315178   40336 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0410 22:16:27.315190   40336 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0410 22:16:27.315206   40336 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0410 22:16:27.315215   40336 command_runner.go:130] > #   state.
	I0410 22:16:27.315226   40336 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0410 22:16:27.315238   40336 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0410 22:16:27.315248   40336 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0410 22:16:27.315256   40336 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0410 22:16:27.315267   40336 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0410 22:16:27.315275   40336 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0410 22:16:27.315281   40336 command_runner.go:130] > #   The currently recognized values are:
	I0410 22:16:27.315294   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0410 22:16:27.315309   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0410 22:16:27.315322   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0410 22:16:27.315335   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0410 22:16:27.315349   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0410 22:16:27.315364   40336 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0410 22:16:27.315377   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0410 22:16:27.315391   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0410 22:16:27.315405   40336 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0410 22:16:27.315418   40336 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0410 22:16:27.315429   40336 command_runner.go:130] > #   deprecated option "conmon".
	I0410 22:16:27.315442   40336 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0410 22:16:27.315452   40336 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0410 22:16:27.315465   40336 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0410 22:16:27.315476   40336 command_runner.go:130] > #   should be moved to the container's cgroup
	I0410 22:16:27.315491   40336 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0410 22:16:27.315501   40336 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0410 22:16:27.315514   40336 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0410 22:16:27.315526   40336 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0410 22:16:27.315535   40336 command_runner.go:130] > #
	I0410 22:16:27.315542   40336 command_runner.go:130] > # Using the seccomp notifier feature:
	I0410 22:16:27.315550   40336 command_runner.go:130] > #
	I0410 22:16:27.315561   40336 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0410 22:16:27.315575   40336 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0410 22:16:27.315582   40336 command_runner.go:130] > #
	I0410 22:16:27.315596   40336 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0410 22:16:27.315609   40336 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0410 22:16:27.315617   40336 command_runner.go:130] > #
	I0410 22:16:27.315627   40336 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0410 22:16:27.315634   40336 command_runner.go:130] > # feature.
	I0410 22:16:27.315643   40336 command_runner.go:130] > #
	I0410 22:16:27.315654   40336 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0410 22:16:27.315668   40336 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0410 22:16:27.315686   40336 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0410 22:16:27.315700   40336 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0410 22:16:27.315710   40336 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0410 22:16:27.315716   40336 command_runner.go:130] > #
	I0410 22:16:27.315726   40336 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0410 22:16:27.315739   40336 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0410 22:16:27.315748   40336 command_runner.go:130] > #
	I0410 22:16:27.315757   40336 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0410 22:16:27.315769   40336 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0410 22:16:27.315777   40336 command_runner.go:130] > #
	I0410 22:16:27.315786   40336 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0410 22:16:27.315797   40336 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0410 22:16:27.315801   40336 command_runner.go:130] > # limitation.
	I0410 22:16:27.315810   40336 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0410 22:16:27.315821   40336 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0410 22:16:27.315831   40336 command_runner.go:130] > runtime_type = "oci"
	I0410 22:16:27.315841   40336 command_runner.go:130] > runtime_root = "/run/runc"
	I0410 22:16:27.315850   40336 command_runner.go:130] > runtime_config_path = ""
	I0410 22:16:27.315862   40336 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0410 22:16:27.315871   40336 command_runner.go:130] > monitor_cgroup = "pod"
	I0410 22:16:27.315880   40336 command_runner.go:130] > monitor_exec_cgroup = ""
	I0410 22:16:27.315886   40336 command_runner.go:130] > monitor_env = [
	I0410 22:16:27.315894   40336 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0410 22:16:27.315902   40336 command_runner.go:130] > ]
	I0410 22:16:27.315911   40336 command_runner.go:130] > privileged_without_host_devices = false
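	For reference, the runtime-handler format described in the comments above could be instantiated for an additional handler roughly as follows. This is a hypothetical sketch for illustration only; the handler name, paths, and annotation are assumptions and are not part of the configuration dumped in this log:
	
	  [crio.runtime.runtimes.crun]
	  runtime_path = "/usr/bin/crun"              # assumed install path
	  runtime_type = "oci"
	  runtime_root = "/run/crun"
	  monitor_path = "/usr/libexec/crio/conmon"
	  monitor_cgroup = "pod"
	  # permit the seccomp notifier annotation discussed above for this handler
	  allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]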
	I0410 22:16:27.315925   40336 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0410 22:16:27.315937   40336 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0410 22:16:27.315950   40336 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0410 22:16:27.315964   40336 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0410 22:16:27.315974   40336 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0410 22:16:27.315986   40336 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0410 22:16:27.316007   40336 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0410 22:16:27.316023   40336 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0410 22:16:27.316035   40336 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0410 22:16:27.316050   40336 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0410 22:16:27.316056   40336 command_runner.go:130] > # Example:
	I0410 22:16:27.316065   40336 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0410 22:16:27.316078   40336 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0410 22:16:27.316090   40336 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0410 22:16:27.316101   40336 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0410 22:16:27.316109   40336 command_runner.go:130] > # cpuset = 0
	I0410 22:16:27.316118   40336 command_runner.go:130] > # cpushares = "0-1"
	I0410 22:16:27.316125   40336 command_runner.go:130] > # Where:
	I0410 22:16:27.316135   40336 command_runner.go:130] > # The workload name is workload-type.
	I0410 22:16:27.316145   40336 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0410 22:16:27.316155   40336 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0410 22:16:27.316168   40336 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0410 22:16:27.316184   40336 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0410 22:16:27.316196   40336 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0410 22:16:27.316207   40336 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0410 22:16:27.316223   40336 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0410 22:16:27.316230   40336 command_runner.go:130] > # Default value is set to true
	I0410 22:16:27.316235   40336 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0410 22:16:27.316248   40336 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0410 22:16:27.316260   40336 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0410 22:16:27.316271   40336 command_runner.go:130] > # Default value is set to 'false'
	I0410 22:16:27.316281   40336 command_runner.go:130] > # disable_hostport_mapping = false
	I0410 22:16:27.316294   40336 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0410 22:16:27.316302   40336 command_runner.go:130] > #
	I0410 22:16:27.316313   40336 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0410 22:16:27.316323   40336 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0410 22:16:27.316336   40336 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0410 22:16:27.316349   40336 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0410 22:16:27.316358   40336 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0410 22:16:27.316364   40336 command_runner.go:130] > [crio.image]
	I0410 22:16:27.316373   40336 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0410 22:16:27.316379   40336 command_runner.go:130] > # default_transport = "docker://"
	I0410 22:16:27.316391   40336 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0410 22:16:27.316412   40336 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0410 22:16:27.316420   40336 command_runner.go:130] > # global_auth_file = ""
	I0410 22:16:27.316429   40336 command_runner.go:130] > # The image used to instantiate infra containers.
	I0410 22:16:27.316437   40336 command_runner.go:130] > # This option supports live configuration reload.
	I0410 22:16:27.316450   40336 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0410 22:16:27.316467   40336 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0410 22:16:27.316480   40336 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0410 22:16:27.316491   40336 command_runner.go:130] > # This option supports live configuration reload.
	I0410 22:16:27.316502   40336 command_runner.go:130] > # pause_image_auth_file = ""
	I0410 22:16:27.316514   40336 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0410 22:16:27.316527   40336 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0410 22:16:27.316540   40336 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0410 22:16:27.316553   40336 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0410 22:16:27.316561   40336 command_runner.go:130] > # pause_command = "/pause"
	I0410 22:16:27.316567   40336 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0410 22:16:27.316579   40336 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0410 22:16:27.316593   40336 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0410 22:16:27.316606   40336 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0410 22:16:27.316618   40336 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0410 22:16:27.316631   40336 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0410 22:16:27.316641   40336 command_runner.go:130] > # pinned_images = [
	I0410 22:16:27.316647   40336 command_runner.go:130] > # ]
	I0410 22:16:27.316653   40336 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0410 22:16:27.316667   40336 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0410 22:16:27.316681   40336 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0410 22:16:27.316694   40336 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0410 22:16:27.316706   40336 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0410 22:16:27.316715   40336 command_runner.go:130] > # signature_policy = ""
	I0410 22:16:27.316726   40336 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0410 22:16:27.316736   40336 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0410 22:16:27.316746   40336 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0410 22:16:27.316760   40336 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0410 22:16:27.316773   40336 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0410 22:16:27.316783   40336 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0410 22:16:27.316799   40336 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0410 22:16:27.316812   40336 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0410 22:16:27.316819   40336 command_runner.go:130] > # changing them here.
	I0410 22:16:27.316823   40336 command_runner.go:130] > # insecure_registries = [
	I0410 22:16:27.316831   40336 command_runner.go:130] > # ]
	I0410 22:16:27.316841   40336 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0410 22:16:27.316854   40336 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0410 22:16:27.316864   40336 command_runner.go:130] > # image_volumes = "mkdir"
	I0410 22:16:27.316876   40336 command_runner.go:130] > # Temporary directory to use for storing big files
	I0410 22:16:27.316886   40336 command_runner.go:130] > # big_files_temporary_dir = ""
	I0410 22:16:27.316899   40336 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0410 22:16:27.316906   40336 command_runner.go:130] > # CNI plugins.
	I0410 22:16:27.316910   40336 command_runner.go:130] > [crio.network]
	I0410 22:16:27.316917   40336 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0410 22:16:27.316923   40336 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0410 22:16:27.316928   40336 command_runner.go:130] > # cni_default_network = ""
	I0410 22:16:27.316937   40336 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0410 22:16:27.316948   40336 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0410 22:16:27.316961   40336 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0410 22:16:27.316970   40336 command_runner.go:130] > # plugin_dirs = [
	I0410 22:16:27.316979   40336 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0410 22:16:27.316987   40336 command_runner.go:130] > # ]
	I0410 22:16:27.316999   40336 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0410 22:16:27.317007   40336 command_runner.go:130] > [crio.metrics]
	I0410 22:16:27.317018   40336 command_runner.go:130] > # Globally enable or disable metrics support.
	I0410 22:16:27.317024   40336 command_runner.go:130] > enable_metrics = true
	I0410 22:16:27.317034   40336 command_runner.go:130] > # Specify enabled metrics collectors.
	I0410 22:16:27.317046   40336 command_runner.go:130] > # Per default all metrics are enabled.
	I0410 22:16:27.317058   40336 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0410 22:16:27.317071   40336 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0410 22:16:27.317084   40336 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0410 22:16:27.317100   40336 command_runner.go:130] > # metrics_collectors = [
	I0410 22:16:27.317110   40336 command_runner.go:130] > # 	"operations",
	I0410 22:16:27.317121   40336 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0410 22:16:27.317132   40336 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0410 22:16:27.317141   40336 command_runner.go:130] > # 	"operations_errors",
	I0410 22:16:27.317152   40336 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0410 22:16:27.317162   40336 command_runner.go:130] > # 	"image_pulls_by_name",
	I0410 22:16:27.317172   40336 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0410 22:16:27.317178   40336 command_runner.go:130] > # 	"image_pulls_failures",
	I0410 22:16:27.317183   40336 command_runner.go:130] > # 	"image_pulls_successes",
	I0410 22:16:27.317190   40336 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0410 22:16:27.317197   40336 command_runner.go:130] > # 	"image_layer_reuse",
	I0410 22:16:27.317204   40336 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0410 22:16:27.317210   40336 command_runner.go:130] > # 	"containers_oom_total",
	I0410 22:16:27.317217   40336 command_runner.go:130] > # 	"containers_oom",
	I0410 22:16:27.317221   40336 command_runner.go:130] > # 	"processes_defunct",
	I0410 22:16:27.317228   40336 command_runner.go:130] > # 	"operations_total",
	I0410 22:16:27.317232   40336 command_runner.go:130] > # 	"operations_latency_seconds",
	I0410 22:16:27.317239   40336 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0410 22:16:27.317243   40336 command_runner.go:130] > # 	"operations_errors_total",
	I0410 22:16:27.317249   40336 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0410 22:16:27.317254   40336 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0410 22:16:27.317260   40336 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0410 22:16:27.317265   40336 command_runner.go:130] > # 	"image_pulls_success_total",
	I0410 22:16:27.317271   40336 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0410 22:16:27.317276   40336 command_runner.go:130] > # 	"containers_oom_count_total",
	I0410 22:16:27.317283   40336 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0410 22:16:27.317287   40336 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0410 22:16:27.317295   40336 command_runner.go:130] > # ]
	I0410 22:16:27.317306   40336 command_runner.go:130] > # The port on which the metrics server will listen.
	I0410 22:16:27.317316   40336 command_runner.go:130] > # metrics_port = 9090
	I0410 22:16:27.317326   40336 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0410 22:16:27.317336   40336 command_runner.go:130] > # metrics_socket = ""
	I0410 22:16:27.317347   40336 command_runner.go:130] > # The certificate for the secure metrics server.
	I0410 22:16:27.317362   40336 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0410 22:16:27.317375   40336 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0410 22:16:27.317385   40336 command_runner.go:130] > # certificate on any modification event.
	I0410 22:16:27.317392   40336 command_runner.go:130] > # metrics_cert = ""
	I0410 22:16:27.317397   40336 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0410 22:16:27.317403   40336 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0410 22:16:27.317407   40336 command_runner.go:130] > # metrics_key = ""
	I0410 22:16:27.317415   40336 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0410 22:16:27.317419   40336 command_runner.go:130] > [crio.tracing]
	I0410 22:16:27.317427   40336 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0410 22:16:27.317433   40336 command_runner.go:130] > # enable_tracing = false
	I0410 22:16:27.317439   40336 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0410 22:16:27.317445   40336 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0410 22:16:27.317452   40336 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0410 22:16:27.317463   40336 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0410 22:16:27.317468   40336 command_runner.go:130] > # CRI-O NRI configuration.
	I0410 22:16:27.317474   40336 command_runner.go:130] > [crio.nri]
	I0410 22:16:27.317478   40336 command_runner.go:130] > # Globally enable or disable NRI.
	I0410 22:16:27.317485   40336 command_runner.go:130] > # enable_nri = false
	I0410 22:16:27.317489   40336 command_runner.go:130] > # NRI socket to listen on.
	I0410 22:16:27.317496   40336 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0410 22:16:27.317500   40336 command_runner.go:130] > # NRI plugin directory to use.
	I0410 22:16:27.317507   40336 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0410 22:16:27.317515   40336 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0410 22:16:27.317522   40336 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0410 22:16:27.317528   40336 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0410 22:16:27.317534   40336 command_runner.go:130] > # nri_disable_connections = false
	I0410 22:16:27.317539   40336 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0410 22:16:27.317546   40336 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0410 22:16:27.317551   40336 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0410 22:16:27.317558   40336 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0410 22:16:27.317564   40336 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0410 22:16:27.317570   40336 command_runner.go:130] > [crio.stats]
	I0410 22:16:27.317576   40336 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0410 22:16:27.317583   40336 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0410 22:16:27.317590   40336 command_runner.go:130] > # stats_collection_period = 0
	I0410 22:16:27.317615   40336 command_runner.go:130] ! time="2024-04-10 22:16:27.279686821Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0410 22:16:27.317629   40336 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0410 22:16:27.317713   40336 cni.go:84] Creating CNI manager for ""
	I0410 22:16:27.317723   40336 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0410 22:16:27.317730   40336 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:16:27.317758   40336 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-824789 NodeName:multinode-824789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:16:27.317871   40336 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-824789"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:16:27.317929   40336 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:16:27.331262   40336 command_runner.go:130] > kubeadm
	I0410 22:16:27.331278   40336 command_runner.go:130] > kubectl
	I0410 22:16:27.331282   40336 command_runner.go:130] > kubelet
	I0410 22:16:27.331625   40336 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:16:27.331671   40336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:16:27.343757   40336 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0410 22:16:27.362930   40336 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:16:27.385496   40336 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0410 22:16:27.404743   40336 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I0410 22:16:27.409081   40336 command_runner.go:130] > 192.168.39.94	control-plane.minikube.internal
	I0410 22:16:27.409143   40336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:16:27.574783   40336 ssh_runner.go:195] Run: sudo systemctl start kubelet
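After the daemon-reload and kubelet start above, a couple of illustrative follow-up checks (not part of the captured run) would be:

	# Confirm the unit is active and look at recent kubelet output
	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet --no-pager -n 20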
	I0410 22:16:27.591631   40336 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789 for IP: 192.168.39.94
	I0410 22:16:27.591654   40336 certs.go:194] generating shared ca certs ...
	I0410 22:16:27.591672   40336 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:16:27.591831   40336 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:16:27.591883   40336 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:16:27.591897   40336 certs.go:256] generating profile certs ...
	I0410 22:16:27.591977   40336 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/client.key
	I0410 22:16:27.592057   40336 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/apiserver.key.7681d9ce
	I0410 22:16:27.592110   40336 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/proxy-client.key
	I0410 22:16:27.592125   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0410 22:16:27.592152   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0410 22:16:27.592173   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0410 22:16:27.592191   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0410 22:16:27.592210   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0410 22:16:27.592231   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0410 22:16:27.592250   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0410 22:16:27.592268   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0410 22:16:27.592339   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:16:27.592378   40336 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:16:27.592392   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:16:27.592447   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:16:27.592481   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:16:27.592512   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:16:27.592565   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:16:27.592606   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:16:27.592625   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem -> /usr/share/ca-certificates/13001.pem
	I0410 22:16:27.592644   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> /usr/share/ca-certificates/130012.pem
	I0410 22:16:27.593191   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:16:27.619566   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:16:27.644253   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:16:27.668991   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:16:27.693015   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0410 22:16:27.717778   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:16:27.742419   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:16:27.768244   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0410 22:16:27.793137   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:16:27.821674   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:16:27.846486   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:16:27.871860   40336 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
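With the certificates and kubeconfig copied into the VM, a hedged sketch of the classic modulus comparison that confirms an x509 certificate matches its private key (assuming the apiserver key pair is RSA, as the key sizes above suggest):

	# The two digests should be identical if cert and key belong together
	openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt | openssl md5
	openssl rsa  -noout -modulus -in /var/lib/minikube/certs/apiserver.key | openssl md5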
	I0410 22:16:27.889082   40336 ssh_runner.go:195] Run: openssl version
	I0410 22:16:27.894926   40336 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0410 22:16:27.895099   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:16:27.906910   40336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:16:27.911293   40336 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:16:27.911489   40336 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:16:27.911527   40336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:16:27.917276   40336 command_runner.go:130] > b5213941
	I0410 22:16:27.917330   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:16:27.926993   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:16:27.938214   40336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:16:27.942742   40336 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:16:27.942858   40336 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:16:27.942911   40336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:16:27.949287   40336 command_runner.go:130] > 51391683
	I0410 22:16:27.949344   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:16:27.959079   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:16:27.970339   40336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:16:27.975028   40336 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:16:27.975070   40336 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:16:27.975118   40336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:16:27.980873   40336 command_runner.go:130] > 3ec20f2e
	I0410 22:16:27.980987   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
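The three repetitions above follow one pattern: hash the certificate with openssl, then symlink it under /etc/ssl/certs/<hash>.0 so OpenSSL's subject-hash lookup can find it. Condensed into a single sketch (certificate name is illustrative):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"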
	I0410 22:16:27.990284   40336 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:16:27.994873   40336 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:16:27.994900   40336 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0410 22:16:27.994909   40336 command_runner.go:130] > Device: 253,1	Inode: 6292486     Links: 1
	I0410 22:16:27.994918   40336 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0410 22:16:27.994932   40336 command_runner.go:130] > Access: 2024-04-10 22:10:14.865982874 +0000
	I0410 22:16:27.994943   40336 command_runner.go:130] > Modify: 2024-04-10 22:10:14.865982874 +0000
	I0410 22:16:27.994950   40336 command_runner.go:130] > Change: 2024-04-10 22:10:14.865982874 +0000
	I0410 22:16:27.994967   40336 command_runner.go:130] >  Birth: 2024-04-10 22:10:14.865982874 +0000
	I0410 22:16:27.995009   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:16:28.000648   40336 command_runner.go:130] > Certificate will not expire
	I0410 22:16:28.000706   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:16:28.006701   40336 command_runner.go:130] > Certificate will not expire
	I0410 22:16:28.006850   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:16:28.012635   40336 command_runner.go:130] > Certificate will not expire
	I0410 22:16:28.012703   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:16:28.018171   40336 command_runner.go:130] > Certificate will not expire
	I0410 22:16:28.018643   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:16:28.024104   40336 command_runner.go:130] > Certificate will not expire
	I0410 22:16:28.024245   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:16:28.029660   40336 command_runner.go:130] > Certificate will not expire
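Each probe above relies on openssl's -checkend flag: with -checkend 86400, openssl prints "Certificate will not expire" and exits 0 if the certificate is still valid one day from now, otherwise it prints "Certificate will expire" and exits non-zero. A standalone sketch against one of the same files:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "etcd server certificate is good for at least another day"
	fi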
	I0410 22:16:28.029856   40336 kubeadm.go:391] StartCluster: {Name:multinode-824789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-824789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.224 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:16:28.029997   40336 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:16:28.030044   40336 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:16:28.069743   40336 command_runner.go:130] > 559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa
	I0410 22:16:28.069775   40336 command_runner.go:130] > c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce
	I0410 22:16:28.069784   40336 command_runner.go:130] > 6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b
	I0410 22:16:28.069794   40336 command_runner.go:130] > 6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c
	I0410 22:16:28.069802   40336 command_runner.go:130] > 2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a
	I0410 22:16:28.069810   40336 command_runner.go:130] > cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9
	I0410 22:16:28.069823   40336 command_runner.go:130] > 33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660
	I0410 22:16:28.069833   40336 command_runner.go:130] > 8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515
	I0410 22:16:28.070023   40336 cri.go:89] found id: "559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa"
	I0410 22:16:28.070038   40336 cri.go:89] found id: "c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce"
	I0410 22:16:28.070044   40336 cri.go:89] found id: "6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b"
	I0410 22:16:28.070048   40336 cri.go:89] found id: "6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c"
	I0410 22:16:28.070053   40336 cri.go:89] found id: "2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a"
	I0410 22:16:28.070057   40336 cri.go:89] found id: "cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9"
	I0410 22:16:28.070062   40336 cri.go:89] found id: "33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660"
	I0410 22:16:28.070065   40336 cri.go:89] found id: "8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515"
	I0410 22:16:28.070069   40336 cri.go:89] found id: ""
	I0410 22:16:28.070120   40336 ssh_runner.go:195] Run: sudo runc list -f json
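The container IDs listed above come from crictl filtered on the kube-system namespace label; the equivalent manual query on the node, using the same flags as the Run line above, would be:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system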
	
	
	==> CRI-O <==
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.182258125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712787476182233314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59690ab8-f567-4a91-929d-13f2cc8be9c8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.182803738Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=035f4635-a4c7-48fc-b169-c168918be4d6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.182882614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=035f4635-a4c7-48fc-b169-c168918be4d6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.183332656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b69ab1eeb616d179fe5a8784376b2875967c0592e582f61eea38471c80e3e84,PodSandboxId:ab19cca9130746bdf30fe7833dd218299d58facbdcb869c0bfd99da0473bb785,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712787427888182402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1452163a5519628d53512c6cdfa710d4393fba40d50434f11f2e79a552f23512,PodSandboxId:c50458e2b81378eb737e89c103b2eb1f14cca493f4e6be985045ad1e173d463f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712787394376623525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fa99a9b394acfb70cf2d3bc625515f04ac5bbaf1e83ce8ed837895f8ed2711,PodSandboxId:8fc64b964d3c6debfb4197aab2b5454bcbbe22981c74b15086aa8bc000bec36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712787394236845048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9766617e949fbcca21ed32d98fe5425705562bbf2b80ced264099a5262049093,PodSandboxId:d0bb5e97176be6754b1a25e9382b9af152484d18499d23f2c607e90826f1faf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712787394149865567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},An
notations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153b13801dcbfa0b0df8df6c049f8c0b02d3726f6fca41e1d3375d394d55c529,PodSandboxId:a4727b3c277b2644be6addc77a5f5ee7f174daf03a364aed4120671cf62f5e3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712787394141758693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9-f5bbc795697e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539e39c1eb16e404b9f016c66bfa0a50882f7a3f450a45b5430e466e766c4d1a,PodSandboxId:7657c2ef27ad1e2c2c39ceadc957c9aa5b99c3b4931db10099ff33156b8b02d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712787390419753041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f09dfe1ad20f92ced33fc247582ae9805c5208dcfdbbb61996b36c12d765d0f9,PodSandboxId:2629bdf637ddfa1fcc4a0230b1b72bc7b8f3ac51064234c275de75f54c098810,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712787390369164206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9024d499796903211265b58e900d4530ff4d8f95c482563d1fc88b6a568e3909,PodSandboxId:cd1042b2912ee16daf10d32a2b4062812d624599cc0367c7d719e0a669e27a52,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712787390378869061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7f29e9a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f191e2e49cdf6315e1e237ffb6d7db9738e9a42cf5ba7ee189377861f57,PodSandboxId:cb438d7c83c336ca9de1cca90cabe7562df21c8e04b46646ba3dc228e6c75c27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712787390348573076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[string]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7fb0eb1db503999354fe6d2250ddc1eb8b4a81807d10d1f2074ee34c0f60b7,PodSandboxId:d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712787086478490953,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa,PodSandboxId:97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712787040069753134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},Annotations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce,PodSandboxId:8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712787039290453019,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b,PodSandboxId:9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712787037548947345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c,PodSandboxId:a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712787037388503224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9
-f5bbc795697e,},Annotations:map[string]string{io.kubernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a,PodSandboxId:c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712787018157395824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb
120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660,PodSandboxId:56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712787018142947435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9,PodSandboxId:e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712787018144622016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io
.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515,PodSandboxId:136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712787018134560832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7f29e9a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=035f4635-a4c7-48fc-b169-c168918be4d6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.234024009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b82e1cbb-9594-4302-bd84-2fb35ef1547f name=/runtime.v1.RuntimeService/Version
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.234317622Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b82e1cbb-9594-4302-bd84-2fb35ef1547f name=/runtime.v1.RuntimeService/Version
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.235873406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44685a71-0d95-478a-8f13-6ec9128c7e90 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.236539260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712787476236511135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44685a71-0d95-478a-8f13-6ec9128c7e90 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.237478032Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85be1c64-4838-4bf3-87d7-0b56ff26f5e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.237532740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85be1c64-4838-4bf3-87d7-0b56ff26f5e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.238535721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b69ab1eeb616d179fe5a8784376b2875967c0592e582f61eea38471c80e3e84,PodSandboxId:ab19cca9130746bdf30fe7833dd218299d58facbdcb869c0bfd99da0473bb785,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712787427888182402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1452163a5519628d53512c6cdfa710d4393fba40d50434f11f2e79a552f23512,PodSandboxId:c50458e2b81378eb737e89c103b2eb1f14cca493f4e6be985045ad1e173d463f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712787394376623525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fa99a9b394acfb70cf2d3bc625515f04ac5bbaf1e83ce8ed837895f8ed2711,PodSandboxId:8fc64b964d3c6debfb4197aab2b5454bcbbe22981c74b15086aa8bc000bec36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712787394236845048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9766617e949fbcca21ed32d98fe5425705562bbf2b80ced264099a5262049093,PodSandboxId:d0bb5e97176be6754b1a25e9382b9af152484d18499d23f2c607e90826f1faf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712787394149865567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},An
notations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153b13801dcbfa0b0df8df6c049f8c0b02d3726f6fca41e1d3375d394d55c529,PodSandboxId:a4727b3c277b2644be6addc77a5f5ee7f174daf03a364aed4120671cf62f5e3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712787394141758693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9-f5bbc795697e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539e39c1eb16e404b9f016c66bfa0a50882f7a3f450a45b5430e466e766c4d1a,PodSandboxId:7657c2ef27ad1e2c2c39ceadc957c9aa5b99c3b4931db10099ff33156b8b02d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712787390419753041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f09dfe1ad20f92ced33fc247582ae9805c5208dcfdbbb61996b36c12d765d0f9,PodSandboxId:2629bdf637ddfa1fcc4a0230b1b72bc7b8f3ac51064234c275de75f54c098810,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712787390369164206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9024d499796903211265b58e900d4530ff4d8f95c482563d1fc88b6a568e3909,PodSandboxId:cd1042b2912ee16daf10d32a2b4062812d624599cc0367c7d719e0a669e27a52,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712787390378869061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7f29e9a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f191e2e49cdf6315e1e237ffb6d7db9738e9a42cf5ba7ee189377861f57,PodSandboxId:cb438d7c83c336ca9de1cca90cabe7562df21c8e04b46646ba3dc228e6c75c27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712787390348573076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[string]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7fb0eb1db503999354fe6d2250ddc1eb8b4a81807d10d1f2074ee34c0f60b7,PodSandboxId:d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712787086478490953,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa,PodSandboxId:97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712787040069753134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},Annotations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce,PodSandboxId:8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712787039290453019,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b,PodSandboxId:9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712787037548947345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c,PodSandboxId:a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712787037388503224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9
-f5bbc795697e,},Annotations:map[string]string{io.kubernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a,PodSandboxId:c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712787018157395824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb
120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660,PodSandboxId:56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712787018142947435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9,PodSandboxId:e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712787018144622016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io
.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515,PodSandboxId:136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712787018134560832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7f29e9a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85be1c64-4838-4bf3-87d7-0b56ff26f5e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.290261124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=46702268-7924-4985-a4c2-6b91e440e336 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.290341730Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46702268-7924-4985-a4c2-6b91e440e336 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.291715647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad1628b1-8cce-4134-9e81-0a34fb94402c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.292222946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712787476292195755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad1628b1-8cce-4134-9e81-0a34fb94402c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.292882283Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82de953b-90f7-4add-a592-1075e7438a13 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.293019762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82de953b-90f7-4add-a592-1075e7438a13 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.293417757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b69ab1eeb616d179fe5a8784376b2875967c0592e582f61eea38471c80e3e84,PodSandboxId:ab19cca9130746bdf30fe7833dd218299d58facbdcb869c0bfd99da0473bb785,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712787427888182402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1452163a5519628d53512c6cdfa710d4393fba40d50434f11f2e79a552f23512,PodSandboxId:c50458e2b81378eb737e89c103b2eb1f14cca493f4e6be985045ad1e173d463f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712787394376623525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fa99a9b394acfb70cf2d3bc625515f04ac5bbaf1e83ce8ed837895f8ed2711,PodSandboxId:8fc64b964d3c6debfb4197aab2b5454bcbbe22981c74b15086aa8bc000bec36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712787394236845048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9766617e949fbcca21ed32d98fe5425705562bbf2b80ced264099a5262049093,PodSandboxId:d0bb5e97176be6754b1a25e9382b9af152484d18499d23f2c607e90826f1faf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712787394149865567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},An
notations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153b13801dcbfa0b0df8df6c049f8c0b02d3726f6fca41e1d3375d394d55c529,PodSandboxId:a4727b3c277b2644be6addc77a5f5ee7f174daf03a364aed4120671cf62f5e3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712787394141758693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9-f5bbc795697e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539e39c1eb16e404b9f016c66bfa0a50882f7a3f450a45b5430e466e766c4d1a,PodSandboxId:7657c2ef27ad1e2c2c39ceadc957c9aa5b99c3b4931db10099ff33156b8b02d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712787390419753041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f09dfe1ad20f92ced33fc247582ae9805c5208dcfdbbb61996b36c12d765d0f9,PodSandboxId:2629bdf637ddfa1fcc4a0230b1b72bc7b8f3ac51064234c275de75f54c098810,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712787390369164206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9024d499796903211265b58e900d4530ff4d8f95c482563d1fc88b6a568e3909,PodSandboxId:cd1042b2912ee16daf10d32a2b4062812d624599cc0367c7d719e0a669e27a52,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712787390378869061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7f29e9a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f191e2e49cdf6315e1e237ffb6d7db9738e9a42cf5ba7ee189377861f57,PodSandboxId:cb438d7c83c336ca9de1cca90cabe7562df21c8e04b46646ba3dc228e6c75c27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712787390348573076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[string]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7fb0eb1db503999354fe6d2250ddc1eb8b4a81807d10d1f2074ee34c0f60b7,PodSandboxId:d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712787086478490953,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa,PodSandboxId:97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712787040069753134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},Annotations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce,PodSandboxId:8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712787039290453019,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b,PodSandboxId:9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712787037548947345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c,PodSandboxId:a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712787037388503224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9
-f5bbc795697e,},Annotations:map[string]string{io.kubernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a,PodSandboxId:c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712787018157395824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb
120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660,PodSandboxId:56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712787018142947435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9,PodSandboxId:e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712787018144622016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io
.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515,PodSandboxId:136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712787018134560832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7f29e9a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82de953b-90f7-4add-a592-1075e7438a13 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.346463794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94704230-bd48-4d70-8472-a3af5aa0768a name=/runtime.v1.RuntimeService/Version
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.346539880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94704230-bd48-4d70-8472-a3af5aa0768a name=/runtime.v1.RuntimeService/Version
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.348327113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44342f71-8add-4100-a35e-c932f5f14b3c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.348881876Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712787476348854378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44342f71-8add-4100-a35e-c932f5f14b3c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.350612063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a673b06-67fd-431e-8a47-937f85611a76 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.350688368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a673b06-67fd-431e-8a47-937f85611a76 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:17:56 multinode-824789 crio[2854]: time="2024-04-10 22:17:56.351105156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b69ab1eeb616d179fe5a8784376b2875967c0592e582f61eea38471c80e3e84,PodSandboxId:ab19cca9130746bdf30fe7833dd218299d58facbdcb869c0bfd99da0473bb785,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712787427888182402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1452163a5519628d53512c6cdfa710d4393fba40d50434f11f2e79a552f23512,PodSandboxId:c50458e2b81378eb737e89c103b2eb1f14cca493f4e6be985045ad1e173d463f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712787394376623525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fa99a9b394acfb70cf2d3bc625515f04ac5bbaf1e83ce8ed837895f8ed2711,PodSandboxId:8fc64b964d3c6debfb4197aab2b5454bcbbe22981c74b15086aa8bc000bec36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712787394236845048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9766617e949fbcca21ed32d98fe5425705562bbf2b80ced264099a5262049093,PodSandboxId:d0bb5e97176be6754b1a25e9382b9af152484d18499d23f2c607e90826f1faf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712787394149865567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},An
notations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153b13801dcbfa0b0df8df6c049f8c0b02d3726f6fca41e1d3375d394d55c529,PodSandboxId:a4727b3c277b2644be6addc77a5f5ee7f174daf03a364aed4120671cf62f5e3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712787394141758693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9-f5bbc795697e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539e39c1eb16e404b9f016c66bfa0a50882f7a3f450a45b5430e466e766c4d1a,PodSandboxId:7657c2ef27ad1e2c2c39ceadc957c9aa5b99c3b4931db10099ff33156b8b02d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712787390419753041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f09dfe1ad20f92ced33fc247582ae9805c5208dcfdbbb61996b36c12d765d0f9,PodSandboxId:2629bdf637ddfa1fcc4a0230b1b72bc7b8f3ac51064234c275de75f54c098810,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712787390369164206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9024d499796903211265b58e900d4530ff4d8f95c482563d1fc88b6a568e3909,PodSandboxId:cd1042b2912ee16daf10d32a2b4062812d624599cc0367c7d719e0a669e27a52,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712787390378869061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7f29e9a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f191e2e49cdf6315e1e237ffb6d7db9738e9a42cf5ba7ee189377861f57,PodSandboxId:cb438d7c83c336ca9de1cca90cabe7562df21c8e04b46646ba3dc228e6c75c27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712787390348573076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[string]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7fb0eb1db503999354fe6d2250ddc1eb8b4a81807d10d1f2074ee34c0f60b7,PodSandboxId:d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712787086478490953,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa,PodSandboxId:97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712787040069753134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},Annotations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce,PodSandboxId:8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712787039290453019,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b,PodSandboxId:9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712787037548947345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c,PodSandboxId:a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712787037388503224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9
-f5bbc795697e,},Annotations:map[string]string{io.kubernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a,PodSandboxId:c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712787018157395824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb
120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660,PodSandboxId:56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712787018142947435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9,PodSandboxId:e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712787018144622016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io
.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515,PodSandboxId:136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712787018134560832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7f29e9a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a673b06-67fd-431e-8a47-937f85611a76 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0b69ab1eeb616       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      48 seconds ago       Running             busybox                   1                   ab19cca913074       busybox-7fdf7869d9-k2ds9
	1452163a55196       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   c50458e2b8137       kindnet-wtnkq
	95fa99a9b394a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   8fc64b964d3c6       coredns-76f75df574-q2q8c
	9766617e949fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   d0bb5e97176be       storage-provisioner
	153b13801dcbf       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      About a minute ago   Running             kube-proxy                1                   a4727b3c277b2       kube-proxy-jczhc
	539e39c1eb16e       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      About a minute ago   Running             kube-scheduler            1                   7657c2ef27ad1       kube-scheduler-multinode-824789
	9024d49979690       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            1                   cd1042b2912ee       kube-apiserver-multinode-824789
	f09dfe1ad20f9       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   1                   2629bdf637ddf       kube-controller-manager-multinode-824789
	34e61f191e2e4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   cb438d7c83c33       etcd-multinode-824789
	3f7fb0eb1db50       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   d4bf1d7c40812       busybox-7fdf7869d9-k2ds9
	559d8ae61200e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   97fdf93300610       storage-provisioner
	c7dc29ebd6ee4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   8bbc7f26b3f24       coredns-76f75df574-q2q8c
	6b912245ff199       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   9a40c2487b0b8       kindnet-wtnkq
	6d0d4dd927396       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago        Exited              kube-proxy                0                   a4899072a08ff       kube-proxy-jczhc
	2541b56a95637       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago        Exited              kube-controller-manager   0                   c70ffd4456f7d       kube-controller-manager-multinode-824789
	cbf4abb7ad40e       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago        Exited              kube-scheduler            0                   e55cc501e3962       kube-scheduler-multinode-824789
	33e5663b850f3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   56ccb96fb9f1e       etcd-multinode-824789
	8486ace19c171       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago        Exited              kube-apiserver            0                   136bc181084da       kube-apiserver-multinode-824789
	
	
	==> coredns [95fa99a9b394acfb70cf2d3bc625515f04ac5bbaf1e83ce8ed837895f8ed2711] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34816 - 37372 "HINFO IN 6222664666433173775.8478308336439852750. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012972412s
	
	
	==> coredns [c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce] <==
	[INFO] 10.244.0.3:38854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001791375s
	[INFO] 10.244.0.3:42513 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118794s
	[INFO] 10.244.0.3:48278 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202941s
	[INFO] 10.244.0.3:51443 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001217049s
	[INFO] 10.244.0.3:45968 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000032888s
	[INFO] 10.244.0.3:51559 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184773s
	[INFO] 10.244.0.3:35257 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050636s
	[INFO] 10.244.1.2:48719 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160086s
	[INFO] 10.244.1.2:33455 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116449s
	[INFO] 10.244.1.2:47230 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104791s
	[INFO] 10.244.1.2:59959 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008388s
	[INFO] 10.244.0.3:52061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176839s
	[INFO] 10.244.0.3:33997 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092769s
	[INFO] 10.244.0.3:58215 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007837s
	[INFO] 10.244.0.3:50061 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076373s
	[INFO] 10.244.1.2:55978 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228426s
	[INFO] 10.244.1.2:50575 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177548s
	[INFO] 10.244.1.2:39720 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000271686s
	[INFO] 10.244.1.2:45401 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134787s
	[INFO] 10.244.0.3:45840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113495s
	[INFO] 10.244.0.3:36486 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000045624s
	[INFO] 10.244.0.3:60591 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071447s
	[INFO] 10.244.0.3:39383 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000057031s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-824789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-824789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=multinode-824789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T22_10_24_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:10:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-824789
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 22:17:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 22:16:33 +0000   Wed, 10 Apr 2024 22:10:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 22:16:33 +0000   Wed, 10 Apr 2024 22:10:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 22:16:33 +0000   Wed, 10 Apr 2024 22:10:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 22:16:33 +0000   Wed, 10 Apr 2024 22:10:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    multinode-824789
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e962647442c84f0e870f4be227995ec1
	  System UUID:                e9626474-42c8-4f0e-870f-4be227995ec1
	  Boot ID:                    951c22ea-9250-4433-b6ed-61a6ed09bb24
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-k2ds9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 coredns-76f75df574-q2q8c                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m20s
	  kube-system                 etcd-multinode-824789                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m32s
	  kube-system                 kindnet-wtnkq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m20s
	  kube-system                 kube-apiserver-multinode-824789             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-controller-manager-multinode-824789    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-proxy-jczhc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-scheduler-multinode-824789             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m18s                  kube-proxy       
	  Normal  Starting                 82s                    kube-proxy       
	  Normal  Starting                 7m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m39s (x8 over 7m39s)  kubelet          Node multinode-824789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m39s (x8 over 7m39s)  kubelet          Node multinode-824789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m39s (x7 over 7m39s)  kubelet          Node multinode-824789 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m32s                  kubelet          Node multinode-824789 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m32s                  kubelet          Node multinode-824789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s                  kubelet          Node multinode-824789 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m32s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m21s                  node-controller  Node multinode-824789 event: Registered Node multinode-824789 in Controller
	  Normal  NodeReady                7m18s                  kubelet          Node multinode-824789 status is now: NodeReady
	  Normal  Starting                 87s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)      kubelet          Node multinode-824789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)      kubelet          Node multinode-824789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 87s)      kubelet          Node multinode-824789 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                    node-controller  Node multinode-824789 event: Registered Node multinode-824789 in Controller
	
	
	Name:               multinode-824789-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-824789-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=multinode-824789
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_10T22_17_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:17:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-824789-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 22:17:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 22:17:46 +0000   Wed, 10 Apr 2024 22:17:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 22:17:46 +0000   Wed, 10 Apr 2024 22:17:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 22:17:46 +0000   Wed, 10 Apr 2024 22:17:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 22:17:46 +0000   Wed, 10 Apr 2024 22:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.85
	  Hostname:    multinode-824789-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7f81c6e777f43cf97aef7d828e06ed9
	  System UUID:                a7f81c6e-777f-43cf-97ae-f7d828e06ed9
	  Boot ID:                    7bcfe80e-c21d-4735-a54e-8f4150c58e96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7p7kp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kindnet-4dcbv               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m44s
	  kube-system                 kube-proxy-qvf7k            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m40s                  kube-proxy  
	  Normal  Starting                 37s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m45s (x2 over 6m45s)  kubelet     Node multinode-824789-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m45s (x2 over 6m45s)  kubelet     Node multinode-824789-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m45s (x2 over 6m45s)  kubelet     Node multinode-824789-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m35s                  kubelet     Node multinode-824789-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  41s (x2 over 41s)      kubelet     Node multinode-824789-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x2 over 41s)      kubelet     Node multinode-824789-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x2 over 41s)      kubelet     Node multinode-824789-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                33s                    kubelet     Node multinode-824789-m02 status is now: NodeReady
	
	
	Name:               multinode-824789-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-824789-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=multinode-824789
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_10T22_17_45_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:17:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-824789-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 22:17:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 22:17:53 +0000   Wed, 10 Apr 2024 22:17:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 22:17:53 +0000   Wed, 10 Apr 2024 22:17:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 22:17:53 +0000   Wed, 10 Apr 2024 22:17:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 22:17:53 +0000   Wed, 10 Apr 2024 22:17:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    multinode-824789-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e1363fb4721412e869db433cf0fbb7d
	  System UUID:                5e1363fb-4721-412e-869d-b433cf0fbb7d
	  Boot ID:                    02bf795f-501a-4284-b486-f3ce4dcde7b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwtsd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-proxy-jtd5w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m53s                  kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  Starting                 5m12s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m58s (x2 over 5m58s)  kubelet     Node multinode-824789-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x2 over 5m58s)  kubelet     Node multinode-824789-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x2 over 5m58s)  kubelet     Node multinode-824789-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m48s                  kubelet     Node multinode-824789-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m17s (x2 over 5m17s)  kubelet     Node multinode-824789-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m17s (x2 over 5m17s)  kubelet     Node multinode-824789-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m17s (x2 over 5m17s)  kubelet     Node multinode-824789-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m8s                   kubelet     Node multinode-824789-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet     Node multinode-824789-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet     Node multinode-824789-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet     Node multinode-824789-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-824789-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.057339] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059586] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.200265] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.122407] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.287304] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.481215] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.062532] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.878711] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +1.197943] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.619447] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.079883] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.073193] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.115244] kauditd_printk_skb: 21 callbacks suppressed
	[Apr10 22:11] kauditd_printk_skb: 84 callbacks suppressed
	[Apr10 22:16] systemd-fstab-generator[2772]: Ignoring "noauto" option for root device
	[  +0.153212] systemd-fstab-generator[2784]: Ignoring "noauto" option for root device
	[  +0.181027] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.158222] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.286379] systemd-fstab-generator[2839]: Ignoring "noauto" option for root device
	[  +0.772835] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +1.897747] systemd-fstab-generator[3065]: Ignoring "noauto" option for root device
	[  +4.677814] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.598612] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.015694] systemd-fstab-generator[3883]: Ignoring "noauto" option for root device
	[Apr10 22:17] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660] <==
	{"level":"info","ts":"2024-04-10T22:11:12.164374Z","caller":"traceutil/trace.go:171","msg":"trace[905070006] linearizableReadLoop","detail":"{readStateIndex:490; appliedIndex:489; }","duration":"249.037662ms","start":"2024-04-10T22:11:11.915327Z","end":"2024-04-10T22:11:12.164365Z","steps":["trace[905070006] 'read index received'  (duration: 242.951588ms)","trace[905070006] 'applied index is now lower than readState.Index'  (duration: 6.085398ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-10T22:11:12.164557Z","caller":"traceutil/trace.go:171","msg":"trace[1671387732] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"246.140207ms","start":"2024-04-10T22:11:11.91841Z","end":"2024-04-10T22:11:12.16455Z","steps":["trace[1671387732] 'process raft request'  (duration: 245.262977ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T22:11:12.164854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.45907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-824789-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T22:11:12.164915Z","caller":"traceutil/trace.go:171","msg":"trace[1285588162] range","detail":"{range_begin:/registry/csinodes/multinode-824789-m02; range_end:; response_count:0; response_revision:475; }","duration":"249.604557ms","start":"2024-04-10T22:11:11.915304Z","end":"2024-04-10T22:11:12.164908Z","steps":["trace[1285588162] 'agreement among raft nodes before linearized reading'  (duration: 249.465521ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T22:11:12.165108Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.81623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T22:11:12.165158Z","caller":"traceutil/trace.go:171","msg":"trace[1109851085] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:475; }","duration":"234.891548ms","start":"2024-04-10T22:11:11.930258Z","end":"2024-04-10T22:11:12.16515Z","steps":["trace[1109851085] 'agreement among raft nodes before linearized reading'  (duration: 234.824663ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T22:11:12.165393Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.073214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-824789-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T22:11:12.167155Z","caller":"traceutil/trace.go:171","msg":"trace[1742958869] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-824789-m02; range_end:; response_count:0; response_revision:475; }","duration":"103.854468ms","start":"2024-04-10T22:11:12.063291Z","end":"2024-04-10T22:11:12.167146Z","steps":["trace[1742958869] 'agreement among raft nodes before linearized reading'  (duration: 102.07959ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T22:11:13.567811Z","caller":"traceutil/trace.go:171","msg":"trace[2079496901] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"237.282322ms","start":"2024-04-10T22:11:13.330512Z","end":"2024-04-10T22:11:13.567794Z","steps":["trace[2079496901] 'process raft request'  (duration: 237.087385ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T22:11:58.662778Z","caller":"traceutil/trace.go:171","msg":"trace[2059953688] linearizableReadLoop","detail":"{readStateIndex:634; appliedIndex:632; }","duration":"187.507947ms","start":"2024-04-10T22:11:58.475253Z","end":"2024-04-10T22:11:58.662761Z","steps":["trace[2059953688] 'read index received'  (duration: 186.643266ms)","trace[2059953688] 'applied index is now lower than readState.Index'  (duration: 863.895µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-10T22:11:58.662871Z","caller":"traceutil/trace.go:171","msg":"trace[1302572631] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"187.787877ms","start":"2024-04-10T22:11:58.475077Z","end":"2024-04-10T22:11:58.662865Z","steps":["trace[1302572631] 'process raft request'  (duration: 187.631288ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T22:11:58.663094Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.773588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-824789-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T22:11:58.663141Z","caller":"traceutil/trace.go:171","msg":"trace[133383368] range","detail":"{range_begin:/registry/csinodes/multinode-824789-m03; range_end:; response_count:0; response_revision:603; }","duration":"187.899202ms","start":"2024-04-10T22:11:58.475232Z","end":"2024-04-10T22:11:58.663132Z","steps":["trace[133383368] 'agreement among raft nodes before linearized reading'  (duration: 187.754386ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T22:11:58.662783Z","caller":"traceutil/trace.go:171","msg":"trace[234368257] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"221.431262ms","start":"2024-04-10T22:11:58.441338Z","end":"2024-04-10T22:11:58.66277Z","steps":["trace[234368257] 'process raft request'  (duration: 220.593512ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T22:14:54.569604Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-10T22:14:54.569737Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-824789","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"]}
	{"level":"warn","ts":"2024-04-10T22:14:54.590444Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-10T22:14:54.590661Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/04/10 22:14:54 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-10T22:14:54.646648Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-10T22:14:54.64683Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-10T22:14:54.646963Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c23cd90330b5fc4f","current-leader-member-id":"c23cd90330b5fc4f"}
	{"level":"info","ts":"2024-04-10T22:14:54.650269Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-04-10T22:14:54.650621Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-04-10T22:14:54.650724Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-824789","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"]}
	
	
	==> etcd [34e61f191e2e49cdf6315e1e237ffb6d7db9738e9a42cf5ba7ee189377861f57] <==
	{"level":"info","ts":"2024-04-10T22:16:30.87528Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:16:30.875307Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:16:30.875555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f switched to configuration voters=(13996300349686021199)"}
	{"level":"info","ts":"2024-04-10T22:16:30.875643Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f81fab91992620a9","local-member-id":"c23cd90330b5fc4f","added-peer-id":"c23cd90330b5fc4f","added-peer-peer-urls":["https://192.168.39.94:2380"]}
	{"level":"info","ts":"2024-04-10T22:16:30.875775Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f81fab91992620a9","local-member-id":"c23cd90330b5fc4f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:16:30.875818Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:16:30.893381Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-10T22:16:30.893605Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c23cd90330b5fc4f","initial-advertise-peer-urls":["https://192.168.39.94:2380"],"listen-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.94:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-10T22:16:30.893651Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-10T22:16:30.89379Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-04-10T22:16:30.89382Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-04-10T22:16:31.904649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-10T22:16:31.904833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-10T22:16:31.904887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f received MsgPreVoteResp from c23cd90330b5fc4f at term 2"}
	{"level":"info","ts":"2024-04-10T22:16:31.905147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became candidate at term 3"}
	{"level":"info","ts":"2024-04-10T22:16:31.905182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f received MsgVoteResp from c23cd90330b5fc4f at term 3"}
	{"level":"info","ts":"2024-04-10T22:16:31.9052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became leader at term 3"}
	{"level":"info","ts":"2024-04-10T22:16:31.905211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c23cd90330b5fc4f elected leader c23cd90330b5fc4f at term 3"}
	{"level":"info","ts":"2024-04-10T22:16:31.913193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:16:31.915513Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.94:2379"}
	{"level":"info","ts":"2024-04-10T22:16:31.91588Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:16:31.917609Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-10T22:16:31.920101Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c23cd90330b5fc4f","local-member-attributes":"{Name:multinode-824789 ClientURLs:[https://192.168.39.94:2379]}","request-path":"/0/members/c23cd90330b5fc4f/attributes","cluster-id":"f81fab91992620a9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-10T22:16:31.920317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-10T22:16:31.920351Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:17:56 up 8 min,  0 users,  load average: 0.20, 0.25, 0.17
	Linux multinode-824789 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1452163a5519628d53512c6cdfa710d4393fba40d50434f11f2e79a552f23512] <==
	I0410 22:17:15.239018       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:17:25.252904       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:17:25.253024       1 main.go:227] handling current node
	I0410 22:17:25.253142       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:17:25.253169       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:17:25.253419       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:17:25.253487       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:17:35.290659       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:17:35.290916       1 main.go:227] handling current node
	I0410 22:17:35.290969       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:17:35.290991       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:17:35.291226       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:17:35.291264       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:17:45.311678       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:17:45.311941       1 main.go:227] handling current node
	I0410 22:17:45.311957       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:17:45.311971       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:17:45.313835       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:17:45.313853       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.2.0/24] 
	I0410 22:17:55.321157       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:17:55.321202       1 main.go:227] handling current node
	I0410 22:17:55.321213       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:17:55.321219       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:17:55.321332       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:17:55.321360       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b] <==
	I0410 22:14:08.818154       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:14:18.830724       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:14:18.830779       1 main.go:227] handling current node
	I0410 22:14:18.830798       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:14:18.830806       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:14:18.830990       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:14:18.831019       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:14:28.838361       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:14:28.838494       1 main.go:227] handling current node
	I0410 22:14:28.838525       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:14:28.838558       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:14:28.838727       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:14:28.838784       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:14:38.848135       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:14:38.848185       1 main.go:227] handling current node
	I0410 22:14:38.848197       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:14:38.848208       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:14:38.848337       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:14:38.848368       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:14:48.853864       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:14:48.853996       1 main.go:227] handling current node
	I0410 22:14:48.854024       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:14:48.854160       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:14:48.854327       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:14:48.854384       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515] <==
	I0410 22:10:20.836467       1 shared_informer.go:318] Caches are synced for configmaps
	I0410 22:10:20.836627       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0410 22:10:20.836668       1 aggregator.go:165] initial CRD sync complete...
	I0410 22:10:20.836675       1 autoregister_controller.go:141] Starting autoregister controller
	I0410 22:10:20.836679       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0410 22:10:20.836683       1 cache.go:39] Caches are synced for autoregister controller
	I0410 22:10:20.840529       1 controller.go:624] quota admission added evaluator for: namespaces
	I0410 22:10:20.875557       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0410 22:10:21.729835       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0410 22:10:21.734973       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0410 22:10:21.735115       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0410 22:10:22.439993       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0410 22:10:22.504340       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0410 22:10:22.592651       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0410 22:10:22.612997       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.94]
	I0410 22:10:22.615314       1 controller.go:624] quota admission added evaluator for: endpoints
	I0410 22:10:22.622733       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0410 22:10:22.784336       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0410 22:10:24.001414       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0410 22:10:24.021868       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0410 22:10:24.042644       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0410 22:10:36.540575       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0410 22:10:36.690647       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0410 22:14:54.566379       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0410 22:14:54.598113       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [9024d499796903211265b58e900d4530ff4d8f95c482563d1fc88b6a568e3909] <==
	I0410 22:16:33.293499       1 establishing_controller.go:76] Starting EstablishingController
	I0410 22:16:33.293516       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0410 22:16:33.293548       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0410 22:16:33.293563       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0410 22:16:33.370557       1 shared_informer.go:318] Caches are synced for configmaps
	I0410 22:16:33.372474       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0410 22:16:33.373760       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0410 22:16:33.384783       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0410 22:16:33.385018       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0410 22:16:33.385106       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0410 22:16:33.390797       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0410 22:16:33.392274       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0410 22:16:33.392457       1 aggregator.go:165] initial CRD sync complete...
	I0410 22:16:33.392490       1 autoregister_controller.go:141] Starting autoregister controller
	I0410 22:16:33.392513       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0410 22:16:33.392537       1 cache.go:39] Caches are synced for autoregister controller
	I0410 22:16:33.432167       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0410 22:16:34.312789       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0410 22:16:35.634781       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0410 22:16:35.776845       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0410 22:16:35.789982       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0410 22:16:35.868402       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0410 22:16:35.875927       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0410 22:16:46.558732       1 controller.go:624] quota admission added evaluator for: endpoints
	I0410 22:16:46.608948       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a] <==
	I0410 22:11:27.442361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="5.477167ms"
	I0410 22:11:27.443224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="54.389µs"
	I0410 22:11:58.672275       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-824789-m03\" does not exist"
	I0410 22:11:58.672756       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:11:58.688883       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-824789-m03" podCIDRs=["10.244.2.0/24"]
	I0410 22:11:58.704396       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jtd5w"
	I0410 22:11:58.710514       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwtsd"
	I0410 22:12:00.855131       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-824789-m03"
	I0410 22:12:00.855230       1 event.go:376] "Event occurred" object="multinode-824789-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-824789-m03 event: Registered Node multinode-824789-m03 in Controller"
	I0410 22:12:08.322618       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:12:38.442685       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:12:39.546615       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-824789-m03\" does not exist"
	I0410 22:12:39.547394       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:12:39.557461       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-824789-m03" podCIDRs=["10.244.3.0/24"]
	I0410 22:12:48.751509       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m03"
	I0410 22:13:25.911256       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m03"
	I0410 22:13:25.912348       1 event.go:376] "Event occurred" object="multinode-824789-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-824789-m02 status is now: NodeNotReady"
	I0410 22:13:25.935015       1 event.go:376] "Event occurred" object="kube-system/kindnet-4dcbv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:13:25.959584       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-qvf7k" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:13:25.983370       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-6cmbq" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:13:25.997076       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.515619ms"
	I0410 22:13:25.997208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="68.025µs"
	I0410 22:13:30.996264       1 event.go:376] "Event occurred" object="multinode-824789-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-824789-m03 status is now: NodeNotReady"
	I0410 22:13:31.009130       1 event.go:376] "Event occurred" object="kube-system/kindnet-rwtsd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:13:31.021654       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-jtd5w" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-controller-manager [f09dfe1ad20f92ced33fc247582ae9805c5208dcfdbbb61996b36c12d765d0f9] <==
	I0410 22:17:11.284993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="28.830964ms"
	I0410 22:17:11.297283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.13525ms"
	I0410 22:17:11.297841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="53.43µs"
	I0410 22:17:11.299631       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="116.032µs"
	I0410 22:17:15.533596       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-824789-m02\" does not exist"
	I0410 22:17:15.534288       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-6cmbq" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-6cmbq"
	I0410 22:17:15.543689       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-824789-m02" podCIDRs=["10.244.1.0/24"]
	I0410 22:17:16.472354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="206.815µs"
	I0410 22:17:16.502281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="60.987µs"
	I0410 22:17:16.514474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="59.446µs"
	I0410 22:17:16.527453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="74.565µs"
	I0410 22:17:16.527773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="110.481µs"
	I0410 22:17:16.534885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="51.542µs"
	I0410 22:17:16.535152       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-6cmbq" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-6cmbq"
	I0410 22:17:23.728415       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:17:23.754222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="49.545µs"
	I0410 22:17:23.778191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="70.426µs"
	I0410 22:17:26.383204       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-7p7kp" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-7p7kp"
	I0410 22:17:26.671200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.221062ms"
	I0410 22:17:26.672927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="322.636µs"
	I0410 22:17:42.955885       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:17:44.190865       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:17:44.191911       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-824789-m03\" does not exist"
	I0410 22:17:44.205339       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-824789-m03" podCIDRs=["10.244.2.0/24"]
	I0410 22:17:53.243949       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	
	
	==> kube-proxy [153b13801dcbfa0b0df8df6c049f8c0b02d3726f6fca41e1d3375d394d55c529] <==
	I0410 22:16:34.455156       1 server_others.go:72] "Using iptables proxy"
	I0410 22:16:34.474717       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.94"]
	I0410 22:16:34.553287       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 22:16:34.553371       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:16:34.553396       1 server_others.go:168] "Using iptables Proxier"
	I0410 22:16:34.560640       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:16:34.560863       1 server.go:865] "Version info" version="v1.29.3"
	I0410 22:16:34.560896       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:16:34.562367       1 config.go:188] "Starting service config controller"
	I0410 22:16:34.562437       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 22:16:34.562470       1 config.go:97] "Starting endpoint slice config controller"
	I0410 22:16:34.562495       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 22:16:34.563215       1 config.go:315] "Starting node config controller"
	I0410 22:16:34.563242       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 22:16:34.663455       1 shared_informer.go:318] Caches are synced for node config
	I0410 22:16:34.663502       1 shared_informer.go:318] Caches are synced for service config
	I0410 22:16:34.663523       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c] <==
	I0410 22:10:38.022399       1 server_others.go:72] "Using iptables proxy"
	I0410 22:10:38.077411       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.94"]
	I0410 22:10:38.153401       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 22:10:38.153422       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:10:38.153433       1 server_others.go:168] "Using iptables Proxier"
	I0410 22:10:38.157017       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:10:38.157276       1 server.go:865] "Version info" version="v1.29.3"
	I0410 22:10:38.157288       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:10:38.161261       1 config.go:188] "Starting service config controller"
	I0410 22:10:38.161489       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 22:10:38.161539       1 config.go:97] "Starting endpoint slice config controller"
	I0410 22:10:38.161557       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 22:10:38.163177       1 config.go:315] "Starting node config controller"
	I0410 22:10:38.163212       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 22:10:38.262573       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0410 22:10:38.262642       1 shared_informer.go:318] Caches are synced for service config
	I0410 22:10:38.265878       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [539e39c1eb16e404b9f016c66bfa0a50882f7a3f450a45b5430e466e766c4d1a] <==
	I0410 22:16:31.601707       1 serving.go:380] Generated self-signed cert in-memory
	W0410 22:16:33.325461       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0410 22:16:33.325503       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 22:16:33.325513       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0410 22:16:33.325519       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0410 22:16:33.385554       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0410 22:16:33.387148       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:16:33.390454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0410 22:16:33.391166       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0410 22:16:33.391610       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0410 22:16:33.394236       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0410 22:16:33.492111       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9] <==
	W0410 22:10:21.692733       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0410 22:10:21.692793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0410 22:10:21.706469       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0410 22:10:21.706527       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0410 22:10:21.760547       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0410 22:10:21.760607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0410 22:10:21.844181       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0410 22:10:21.844235       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0410 22:10:21.874811       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0410 22:10:21.875479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0410 22:10:21.881409       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0410 22:10:21.881513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0410 22:10:21.905237       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0410 22:10:21.905719       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 22:10:22.009228       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0410 22:10:22.009972       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0410 22:10:22.047858       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0410 22:10:22.048623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0410 22:10:22.192264       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0410 22:10:22.192322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0410 22:10:24.037077       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0410 22:14:54.566943       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0410 22:14:54.567239       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0410 22:14:54.567501       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0410 22:14:54.589885       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 10 22:16:33 multinode-824789 kubelet[3072]: I0410 22:16:33.591220    3072 topology_manager.go:215] "Topology Admit Handler" podUID="e571cab5-3579-4616-90f8-a9c465e70ace" podNamespace="kube-system" podName="storage-provisioner"
	Apr 10 22:16:33 multinode-824789 kubelet[3072]: I0410 22:16:33.591475    3072 topology_manager.go:215] "Topology Admit Handler" podUID="f84d3580-83d9-497d-bc27-9d1112849093" podNamespace="default" podName="busybox-7fdf7869d9-k2ds9"
	Apr 10 22:16:33 multinode-824789 kubelet[3072]: I0410 22:16:33.595743    3072 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 10 22:16:33 multinode-824789 kubelet[3072]: I0410 22:16:33.634005    3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7169290a-557c-4861-8ecd-e2a0b2c0b290-cni-cfg\") pod \"kindnet-wtnkq\" (UID: \"7169290a-557c-4861-8ecd-e2a0b2c0b290\") " pod="kube-system/kindnet-wtnkq"
	Apr 10 22:16:33 multinode-824789 kubelet[3072]: I0410 22:16:33.634114    3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7169290a-557c-4861-8ecd-e2a0b2c0b290-xtables-lock\") pod \"kindnet-wtnkq\" (UID: \"7169290a-557c-4861-8ecd-e2a0b2c0b290\") " pod="kube-system/kindnet-wtnkq"
	Apr 10 22:16:33 multinode-824789 kubelet[3072]: I0410 22:16:33.634140    3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bc151d6-2081-4f28-80d9-f5bbc795697e-xtables-lock\") pod \"kube-proxy-jczhc\" (UID: \"6bc151d6-2081-4f28-80d9-f5bbc795697e\") " pod="kube-system/kube-proxy-jczhc"
	Apr 10 22:16:33 multinode-824789 kubelet[3072]: I0410 22:16:33.634175    3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bc151d6-2081-4f28-80d9-f5bbc795697e-lib-modules\") pod \"kube-proxy-jczhc\" (UID: \"6bc151d6-2081-4f28-80d9-f5bbc795697e\") " pod="kube-system/kube-proxy-jczhc"
	Apr 10 22:16:33 multinode-824789 kubelet[3072]: I0410 22:16:33.634215    3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7169290a-557c-4861-8ecd-e2a0b2c0b290-lib-modules\") pod \"kindnet-wtnkq\" (UID: \"7169290a-557c-4861-8ecd-e2a0b2c0b290\") " pod="kube-system/kindnet-wtnkq"
	Apr 10 22:16:33 multinode-824789 kubelet[3072]: I0410 22:16:33.634256    3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e571cab5-3579-4616-90f8-a9c465e70ace-tmp\") pod \"storage-provisioner\" (UID: \"e571cab5-3579-4616-90f8-a9c465e70ace\") " pod="kube-system/storage-provisioner"
	Apr 10 22:16:35 multinode-824789 kubelet[3072]: I0410 22:16:35.782755    3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 10 22:16:41 multinode-824789 kubelet[3072]: I0410 22:16:41.003997    3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 10 22:17:29 multinode-824789 kubelet[3072]: E0410 22:17:29.687187    3072 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 22:17:29 multinode-824789 kubelet[3072]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 22:17:29 multinode-824789 kubelet[3072]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 22:17:29 multinode-824789 kubelet[3072]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 22:17:29 multinode-824789 kubelet[3072]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 22:17:29 multinode-824789 kubelet[3072]: E0410 22:17:29.691126    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6b6548b0f76d3607d58faa9b3e608948/crio-e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd: Error finding container e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd: Status 404 returned error can't find the container with id e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd
	Apr 10 22:17:29 multinode-824789 kubelet[3072]: E0410 22:17:29.691371    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod12b4e0e3d4dfd3581ea04dc539f54186/crio-136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70: Error finding container 136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70: Status 404 returned error can't find the container with id 136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70
	Apr 10 22:17:29 multinode-824789 kubelet[3072]: E0410 22:17:29.691547    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode335e4d5-f65f-4722-b2c1-60e22cd08383/crio-8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194: Error finding container 8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194: Status 404 returned error can't find the container with id 8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194
	Apr 10 22:17:29 multinode-824789 kubelet[3072]: E0410 22:17:29.691781    3072 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod6bc151d6-2081-4f28-80d9-f5bbc795697e/crio-a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65: Error finding container a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65: Status 404 returned error can't find the container with id a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65
	Apr 10 22:17:29 multinode-824789 kubelet[3072]: E0410 22:17:29.692153    3072 manager.go:1116] Failed to create existing container: /kubepods/pod7169290a-557c-4861-8ecd-e2a0b2c0b290/crio-9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e: Error finding container 9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e: Status 404 returned error can't find the container with id 9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e
	Apr 10 22:17:29 multinode-824789 kubelet[3072]: E0410 22:17:29.692490    3072 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podf84d3580-83d9-497d-bc27-9d1112849093/crio-d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693: Error finding container d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693: Status 404 returned error can't find the container with id d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693
	Apr 10 22:17:29 multinode-824789 kubelet[3072]: E0410 22:17:29.692941    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2800bfb120fc35f1c411b49e7bd24fc4/crio-c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f: Error finding container c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f: Status 404 returned error can't find the container with id c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f
	Apr 10 22:17:29 multinode-824789 kubelet[3072]: E0410 22:17:29.693204    3072 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pode571cab5-3579-4616-90f8-a9c465e70ace/crio-97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce: Error finding container 97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce: Status 404 returned error can't find the container with id 97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce
	Apr 10 22:17:29 multinode-824789 kubelet[3072]: E0410 22:17:29.693477    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod8b2c1d24c176a5f0fdc05076676f83e4/crio-56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288: Error finding container 56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288: Status 404 returned error can't find the container with id 56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:17:55.857161   41805 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18610-5679/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-824789 -n multinode-824789
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-824789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (306.43s)
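Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner refuses any line longer than its current buffer limit (64 KiB by default, bufio.MaxScanTokenSize) and reports bufio.ErrTooLong, which is why the post-mortem could not re-read lastStart.txt, whose single-line cluster-config entries are far longer than that. A minimal, illustrative Go sketch of reading such a file with an enlarged scanner buffer (the file path and the 1 MiB cap are assumptions for the example, not minikube's actual handling):

	// Illustrative sketch: read a log file whose lines can exceed bufio.Scanner's
	// default 64 KiB token limit without hitting "bufio.Scanner: token too long".
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path for the example
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Start with a 64 KiB buffer but allow tokens up to 1 MiB.
		sc.Buffer(make([]byte, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err) // bufio.ErrTooLong would surface here
		}
	}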

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 stop
E0410 22:19:57.160349   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-824789 stop: exit status 82 (2m0.485697831s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-824789-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-824789 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-824789 status: exit status 3 (18.896711833s)

                                                
                                                
-- stdout --
	multinode-824789
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-824789-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:20:19.788699   42485 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.85:22: connect: no route to host
	E0410 22:20:19.788751   42485 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.85:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-824789 status" : exit status 3
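Note on the two failures above: the stop of "multinode-824789-m02" timed out (GUEST_STOP_TIMEOUT, exit status 82) with the VM still reported as "Running", and the follow-up status check then failed with "dial tcp 192.168.39.85:22: connect: no route to host" (exit status 3), i.e. the node was left running but with its SSH port unreachable. A minimal, illustrative Go sketch of probing a node's SSH port with a timeout, which reproduces this failure mode (the address is taken from the error above; the 5 s timeout is an assumption, and this is not minikube's status implementation):

	// Illustrative sketch: check whether a node's SSH port answers within a timeout.
	// An unreachable-but-running VM surfaces errors such as "connect: no route to host".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.85:22" // node address from the status error above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("node unreachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("node reachable")
	}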
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-824789 -n multinode-824789
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-824789 logs -n 25: (1.579364183s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m02:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789:/home/docker/cp-test_multinode-824789-m02_multinode-824789.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n multinode-824789 sudo cat                                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /home/docker/cp-test_multinode-824789-m02_multinode-824789.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m02:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03:/home/docker/cp-test_multinode-824789-m02_multinode-824789-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n multinode-824789-m03 sudo cat                                   | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /home/docker/cp-test_multinode-824789-m02_multinode-824789-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp testdata/cp-test.txt                                                | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m03:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2014130066/001/cp-test_multinode-824789-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m03:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789:/home/docker/cp-test_multinode-824789-m03_multinode-824789.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n multinode-824789 sudo cat                                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /home/docker/cp-test_multinode-824789-m03_multinode-824789.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-824789 cp multinode-824789-m03:/home/docker/cp-test.txt                       | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m02:/home/docker/cp-test_multinode-824789-m03_multinode-824789-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n                                                                 | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | multinode-824789-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-824789 ssh -n multinode-824789-m02 sudo cat                                   | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | /home/docker/cp-test_multinode-824789-m03_multinode-824789-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-824789 node stop m03                                                          | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	| node    | multinode-824789 node start                                                             | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC | 10 Apr 24 22:12 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-824789                                                                | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC |                     |
	| stop    | -p multinode-824789                                                                     | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:12 UTC |                     |
	| start   | -p multinode-824789                                                                     | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:14 UTC | 10 Apr 24 22:17 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-824789                                                                | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:17 UTC |                     |
	| node    | multinode-824789 node delete                                                            | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:17 UTC | 10 Apr 24 22:17 UTC |
	|         | m03                                                                                     |                  |         |                |                     |                     |
	| stop    | multinode-824789 stop                                                                   | multinode-824789 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:18 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:14:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:14:53.628490   40336 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:14:53.628629   40336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:14:53.628649   40336 out.go:304] Setting ErrFile to fd 2...
	I0410 22:14:53.628653   40336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:14:53.628881   40336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:14:53.629430   40336 out.go:298] Setting JSON to false
	I0410 22:14:53.630334   40336 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3436,"bootTime":1712783858,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:14:53.630391   40336 start.go:139] virtualization: kvm guest
	I0410 22:14:53.632834   40336 out.go:177] * [multinode-824789] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:14:53.634717   40336 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:14:53.634681   40336 notify.go:220] Checking for updates...
	I0410 22:14:53.637404   40336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:14:53.638739   40336 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:14:53.639991   40336 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:14:53.641393   40336 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:14:53.642680   40336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:14:53.644681   40336 config.go:182] Loaded profile config "multinode-824789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:14:53.644829   40336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:14:53.645345   40336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:14:53.645392   40336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:14:53.660256   40336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43223
	I0410 22:14:53.660691   40336 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:14:53.661293   40336 main.go:141] libmachine: Using API Version  1
	I0410 22:14:53.661312   40336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:14:53.661579   40336 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:14:53.661763   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:14:53.696461   40336 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:14:53.698019   40336 start.go:297] selected driver: kvm2
	I0410 22:14:53.698038   40336 start.go:901] validating driver "kvm2" against &{Name:multinode-824789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-824789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.224 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:14:53.698231   40336 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:14:53.698669   40336 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:14:53.698748   40336 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:14:53.713567   40336 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:14:53.714442   40336 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:14:53.714527   40336 cni.go:84] Creating CNI manager for ""
	I0410 22:14:53.714543   40336 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0410 22:14:53.714706   40336 start.go:340] cluster config:
	{Name:multinode-824789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-824789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.224 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:14:53.714936   40336 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:14:53.717727   40336 out.go:177] * Starting "multinode-824789" primary control-plane node in "multinode-824789" cluster
	I0410 22:14:53.719399   40336 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:14:53.719434   40336 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 22:14:53.719450   40336 cache.go:56] Caching tarball of preloaded images
	I0410 22:14:53.719522   40336 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:14:53.719533   40336 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 22:14:53.719645   40336 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/config.json ...
	I0410 22:14:53.719850   40336 start.go:360] acquireMachinesLock for multinode-824789: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:14:53.719914   40336 start.go:364] duration metric: took 38.55µs to acquireMachinesLock for "multinode-824789"
	I0410 22:14:53.719940   40336 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:14:53.719956   40336 fix.go:54] fixHost starting: 
	I0410 22:14:53.720254   40336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:14:53.720340   40336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:14:53.734886   40336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0410 22:14:53.735339   40336 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:14:53.735818   40336 main.go:141] libmachine: Using API Version  1
	I0410 22:14:53.735839   40336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:14:53.736189   40336 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:14:53.736371   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:14:53.736568   40336 main.go:141] libmachine: (multinode-824789) Calling .GetState
	I0410 22:14:53.738112   40336 fix.go:112] recreateIfNeeded on multinode-824789: state=Running err=<nil>
	W0410 22:14:53.738131   40336 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:14:53.740221   40336 out.go:177] * Updating the running kvm2 "multinode-824789" VM ...
	I0410 22:14:53.741533   40336 machine.go:94] provisionDockerMachine start ...
	I0410 22:14:53.741550   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:14:53.741746   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:53.744122   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.744597   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:53.744627   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.744766   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:14:53.744931   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.745089   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.745215   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:14:53.745386   40336 main.go:141] libmachine: Using SSH client type: native
	I0410 22:14:53.745602   40336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0410 22:14:53.745615   40336 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:14:53.850406   40336 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-824789
	
	I0410 22:14:53.850429   40336 main.go:141] libmachine: (multinode-824789) Calling .GetMachineName
	I0410 22:14:53.850668   40336 buildroot.go:166] provisioning hostname "multinode-824789"
	I0410 22:14:53.850688   40336 main.go:141] libmachine: (multinode-824789) Calling .GetMachineName
	I0410 22:14:53.850865   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:53.853722   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.854112   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:53.854142   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.854292   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:14:53.854480   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.854636   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.854833   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:14:53.855008   40336 main.go:141] libmachine: Using SSH client type: native
	I0410 22:14:53.855228   40336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0410 22:14:53.855248   40336 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-824789 && echo "multinode-824789" | sudo tee /etc/hostname
	I0410 22:14:53.973675   40336 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-824789
	
	I0410 22:14:53.973709   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:53.976949   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.977427   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:53.977469   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:53.977659   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:14:53.977864   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.978041   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:53.978214   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:14:53.978423   40336 main.go:141] libmachine: Using SSH client type: native
	I0410 22:14:53.978620   40336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0410 22:14:53.978644   40336 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-824789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-824789/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-824789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:14:54.082096   40336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:14:54.082126   40336 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:14:54.082159   40336 buildroot.go:174] setting up certificates
	I0410 22:14:54.082168   40336 provision.go:84] configureAuth start
	I0410 22:14:54.082176   40336 main.go:141] libmachine: (multinode-824789) Calling .GetMachineName
	I0410 22:14:54.082493   40336 main.go:141] libmachine: (multinode-824789) Calling .GetIP
	I0410 22:14:54.084770   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.085201   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:54.085216   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.085389   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:54.088049   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.088423   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:54.088457   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.088586   40336 provision.go:143] copyHostCerts
	I0410 22:14:54.088628   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:14:54.088666   40336 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:14:54.088686   40336 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:14:54.088770   40336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:14:54.088876   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:14:54.088902   40336 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:14:54.088912   40336 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:14:54.088955   40336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:14:54.089030   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:14:54.089051   40336 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:14:54.089068   40336 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:14:54.089105   40336 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:14:54.089183   40336 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.multinode-824789 san=[127.0.0.1 192.168.39.94 localhost minikube multinode-824789]
	I0410 22:14:54.262659   40336 provision.go:177] copyRemoteCerts
	I0410 22:14:54.262718   40336 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:14:54.262740   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:54.265429   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.265740   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:54.265769   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.265990   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:14:54.266159   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:54.266307   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:14:54.266454   40336 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/multinode-824789/id_rsa Username:docker}
	I0410 22:14:54.350009   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0410 22:14:54.350086   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:14:54.377365   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0410 22:14:54.377473   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:14:54.405619   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0410 22:14:54.405697   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0410 22:14:54.432574   40336 provision.go:87] duration metric: took 350.395007ms to configureAuth
	I0410 22:14:54.432604   40336 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:14:54.432827   40336 config.go:182] Loaded profile config "multinode-824789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:14:54.432907   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:14:54.435629   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.436034   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:14:54.436061   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:14:54.436203   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:14:54.436435   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:54.436610   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:14:54.436733   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:14:54.436949   40336 main.go:141] libmachine: Using SSH client type: native
	I0410 22:14:54.437150   40336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0410 22:14:54.437187   40336 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:16:25.258287   40336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:16:25.258317   40336 machine.go:97] duration metric: took 1m31.516769992s to provisionDockerMachine
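The step above writes CRIO_MINIKUBE_OPTIONS into a sysconfig drop-in and restarts CRI-O over SSH; that single command ran from 22:14:54 to 22:16:25 and accounts for most of the 1m31s provisionDockerMachine duration. A minimal Go sketch of how such a command string could be composed (function name and structure are illustrative, not minikube's actual provisioning code):

package main

import "fmt"

// buildCrioEnvCmd composes a shell command of the same shape as the one logged
// above: write CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and
// restart the crio service. Illustrative sketch only.
func buildCrioEnvCmd(insecureRegistry string) string {
	env := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureRegistry)
	return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s %q | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", env)
}

func main() {
	fmt.Println(buildCrioEnvCmd("10.96.0.0/12"))
}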
	I0410 22:16:25.258334   40336 start.go:293] postStartSetup for "multinode-824789" (driver="kvm2")
	I0410 22:16:25.258347   40336 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:16:25.258388   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:16:25.258738   40336 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:16:25.258773   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:16:25.261516   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.261868   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:25.261905   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.262090   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:16:25.262304   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:16:25.262480   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:16:25.262717   40336 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/multinode-824789/id_rsa Username:docker}
	I0410 22:16:25.345695   40336 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:16:25.350004   40336 command_runner.go:130] > NAME=Buildroot
	I0410 22:16:25.350024   40336 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0410 22:16:25.350028   40336 command_runner.go:130] > ID=buildroot
	I0410 22:16:25.350040   40336 command_runner.go:130] > VERSION_ID=2023.02.9
	I0410 22:16:25.350045   40336 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0410 22:16:25.350070   40336 info.go:137] Remote host: Buildroot 2023.02.9
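postStartSetup identifies the guest OS by reading /etc/os-release, whose key=value lines appear above. A hedged Go sketch of parsing that format (not minikube's actual parser):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release style KEY=VALUE lines into a map,
// stripping optional surrounding quotes. Illustrative only.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	fmt.Printf("Remote host: %s %s\n", info["NAME"], info["VERSION_ID"])
}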
	I0410 22:16:25.350083   40336 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:16:25.350142   40336 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:16:25.350232   40336 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:16:25.350245   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> /etc/ssl/certs/130012.pem
	I0410 22:16:25.350335   40336 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:16:25.360546   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:16:25.386152   40336 start.go:296] duration metric: took 127.805194ms for postStartSetup
	I0410 22:16:25.386193   40336 fix.go:56] duration metric: took 1m31.666243462s for fixHost
	I0410 22:16:25.386211   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:16:25.388883   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.389194   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:25.389227   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.389358   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:16:25.389557   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:16:25.389721   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:16:25.389877   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:16:25.390030   40336 main.go:141] libmachine: Using SSH client type: native
	I0410 22:16:25.390236   40336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0410 22:16:25.390248   40336 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0410 22:16:25.493429   40336 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712787385.473870702
	
	I0410 22:16:25.493475   40336 fix.go:216] guest clock: 1712787385.473870702
	I0410 22:16:25.493488   40336 fix.go:229] Guest: 2024-04-10 22:16:25.473870702 +0000 UTC Remote: 2024-04-10 22:16:25.386196463 +0000 UTC m=+91.804764720 (delta=87.674239ms)
	I0410 22:16:25.493551   40336 fix.go:200] guest clock delta is within tolerance: 87.674239ms
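The fix.go lines above compare the guest clock (read via `date +%s.%N`) with the host clock and accept the host when the delta is small enough. A short Go sketch of that comparison; the 2s tolerance used here is an assumption for illustration, not minikube's documented value:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the seconds.nanoseconds output of `date +%s.%N`, computes the
// host-guest delta, and reports whether it is within tolerance. Sketch only.
func clockDelta(guestOutput string, host time.Time, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	// Values taken from the log lines above.
	delta, ok, err := clockDelta("1712787385.473870702", time.Unix(1712787385, 386196463), 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}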
	I0410 22:16:25.493559   40336 start.go:83] releasing machines lock for "multinode-824789", held for 1m31.773630625s
	I0410 22:16:25.493592   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:16:25.493892   40336 main.go:141] libmachine: (multinode-824789) Calling .GetIP
	I0410 22:16:25.496612   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.496985   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:25.497023   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.497153   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:16:25.497581   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:16:25.497777   40336 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:16:25.497864   40336 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:16:25.497902   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:16:25.498028   40336 ssh_runner.go:195] Run: cat /version.json
	I0410 22:16:25.498050   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:16:25.500762   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.501134   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:25.501160   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.501180   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.501297   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:16:25.501498   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:16:25.501624   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:25.501636   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:16:25.501645   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:25.501801   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:16:25.501809   40336 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/multinode-824789/id_rsa Username:docker}
	I0410 22:16:25.501982   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:16:25.502137   40336 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:16:25.502319   40336 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/multinode-824789/id_rsa Username:docker}
	I0410 22:16:25.609848   40336 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0410 22:16:25.610640   40336 command_runner.go:130] > {"iso_version": "v1.33.0-1712743565-18610", "kicbase_version": "v0.0.43-1712593525-18585", "minikube_version": "v1.33.0-beta.0", "commit": "c0a429c696190f9570e438712701fdb5e36a248a"}
	I0410 22:16:25.610778   40336 ssh_runner.go:195] Run: systemctl --version
	I0410 22:16:25.616958   40336 command_runner.go:130] > systemd 252 (252)
	I0410 22:16:25.617006   40336 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
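Just above, /version.json from the guest is read back to confirm which ISO, kicbase and minikube versions the machine was built from. A minimal Go sketch of decoding that payload (struct and field names mirror the logged JSON keys):

package main

import (
	"encoding/json"
	"fmt"
)

// versionJSON matches the /version.json document shown in the log above;
// it only illustrates how the recorded versions could be read back.
type versionJSON struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	raw := `{"iso_version": "v1.33.0-1712743565-18610", "kicbase_version": "v0.0.43-1712593525-18585", "minikube_version": "v1.33.0-beta.0", "commit": "c0a429c696190f9570e438712701fdb5e36a248a"}`
	var v versionJSON
	if err := json.Unmarshal([]byte(raw), &v); err != nil {
		panic(err)
	}
	fmt.Printf("iso=%s kicbase=%s minikube=%s\n", v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion)
}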
	I0410 22:16:25.617204   40336 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:16:25.784844   40336 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0410 22:16:25.792819   40336 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0410 22:16:25.793319   40336 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:16:25.793384   40336 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:16:25.803494   40336 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
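The find/mv step above sidelines any bridge or podman CNI configs by renaming them to *.mk_disabled so they cannot conflict with the cluster's CNI. A rough Go equivalent of that rename pass (illustrative; it operates directly on a local directory rather than over SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files to *.mk_disabled,
// mirroring the effect of the find/mv command in the log above. Sketch only.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return moved, err
		}
		moved = append(moved, src)
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("disabled:", moved)
}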
	I0410 22:16:25.803518   40336 start.go:494] detecting cgroup driver to use...
	I0410 22:16:25.803590   40336 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:16:25.821232   40336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:16:25.836293   40336 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:16:25.836369   40336 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:16:25.850632   40336 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:16:25.865087   40336 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:16:26.014680   40336 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:16:26.163698   40336 docker.go:233] disabling docker service ...
	I0410 22:16:26.163757   40336 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:16:26.180916   40336 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:16:26.195108   40336 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:16:26.346942   40336 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:16:26.498431   40336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:16:26.514013   40336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:16:26.534278   40336 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0410 22:16:26.534326   40336 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:16:26.534377   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.545430   40336 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:16:26.545501   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.555868   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.566665   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.577249   40336 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:16:26.588157   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.598959   40336 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.611855   40336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:16:26.624189   40336 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:16:26.634648   40336 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0410 22:16:26.634708   40336 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:16:26.645184   40336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:16:26.790860   40336 ssh_runner.go:195] Run: sudo systemctl restart crio
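The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and force the cgroupfs cgroup manager before crio is restarted. A Go sketch of the same line rewrites applied to an in-memory config (illustrative; on the node the changes are made with sed):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf rewrites the pause_image and cgroup_manager lines of a CRI-O
// config fragment, the same substitutions the sed commands above perform.
func patchCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}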
	I0410 22:16:27.040333   40336 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:16:27.040423   40336 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:16:27.045863   40336 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0410 22:16:27.045884   40336 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0410 22:16:27.045917   40336 command_runner.go:130] > Device: 0,22	Inode: 1323        Links: 1
	I0410 22:16:27.045930   40336 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0410 22:16:27.045935   40336 command_runner.go:130] > Access: 2024-04-10 22:16:26.913249186 +0000
	I0410 22:16:27.045943   40336 command_runner.go:130] > Modify: 2024-04-10 22:16:26.913249186 +0000
	I0410 22:16:27.045950   40336 command_runner.go:130] > Change: 2024-04-10 22:16:26.913249186 +0000
	I0410 22:16:27.045957   40336 command_runner.go:130] >  Birth: -
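After restarting crio, the run waits up to 60s for /var/run/crio/crio.sock and confirms it with stat, as shown above. A simple Go sketch of such a wait loop; the 500ms poll interval is an arbitrary choice for illustration:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the timeout
// elapses -- the same idea as the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}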
	I0410 22:16:27.046032   40336 start.go:562] Will wait 60s for crictl version
	I0410 22:16:27.046111   40336 ssh_runner.go:195] Run: which crictl
	I0410 22:16:27.049899   40336 command_runner.go:130] > /usr/bin/crictl
	I0410 22:16:27.050043   40336 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:16:27.090666   40336 command_runner.go:130] > Version:  0.1.0
	I0410 22:16:27.090692   40336 command_runner.go:130] > RuntimeName:  cri-o
	I0410 22:16:27.090696   40336 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0410 22:16:27.090703   40336 command_runner.go:130] > RuntimeApiVersion:  v1
	I0410 22:16:27.091682   40336 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
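The crictl version output above is plain "Key:  value" text. A small Go sketch of turning it into a map for later checks (a hypothetical helper, not minikube's code):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion reads the plain-text output of `crictl version`, as shown
// above, into a map keyed by field name. Sketch only.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(out)
	fmt.Printf("%s %s (API %s)\n", v["RuntimeName"], v["RuntimeVersion"], v["RuntimeApiVersion"])
}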
	I0410 22:16:27.091743   40336 ssh_runner.go:195] Run: crio --version
	I0410 22:16:27.127223   40336 command_runner.go:130] > crio version 1.29.1
	I0410 22:16:27.127251   40336 command_runner.go:130] > Version:        1.29.1
	I0410 22:16:27.127259   40336 command_runner.go:130] > GitCommit:      unknown
	I0410 22:16:27.127264   40336 command_runner.go:130] > GitCommitDate:  unknown
	I0410 22:16:27.127269   40336 command_runner.go:130] > GitTreeState:   clean
	I0410 22:16:27.127276   40336 command_runner.go:130] > BuildDate:      2024-04-10T15:40:24Z
	I0410 22:16:27.127282   40336 command_runner.go:130] > GoVersion:      go1.21.6
	I0410 22:16:27.127288   40336 command_runner.go:130] > Compiler:       gc
	I0410 22:16:27.127295   40336 command_runner.go:130] > Platform:       linux/amd64
	I0410 22:16:27.127301   40336 command_runner.go:130] > Linkmode:       dynamic
	I0410 22:16:27.127308   40336 command_runner.go:130] > BuildTags:      
	I0410 22:16:27.127314   40336 command_runner.go:130] >   containers_image_ostree_stub
	I0410 22:16:27.127320   40336 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0410 22:16:27.127338   40336 command_runner.go:130] >   btrfs_noversion
	I0410 22:16:27.127349   40336 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0410 22:16:27.127357   40336 command_runner.go:130] >   libdm_no_deferred_remove
	I0410 22:16:27.127362   40336 command_runner.go:130] >   seccomp
	I0410 22:16:27.127372   40336 command_runner.go:130] > LDFlags:          unknown
	I0410 22:16:27.127379   40336 command_runner.go:130] > SeccompEnabled:   true
	I0410 22:16:27.127386   40336 command_runner.go:130] > AppArmorEnabled:  false
	I0410 22:16:27.127477   40336 ssh_runner.go:195] Run: crio --version
	I0410 22:16:27.155799   40336 command_runner.go:130] > crio version 1.29.1
	I0410 22:16:27.155821   40336 command_runner.go:130] > Version:        1.29.1
	I0410 22:16:27.155826   40336 command_runner.go:130] > GitCommit:      unknown
	I0410 22:16:27.155830   40336 command_runner.go:130] > GitCommitDate:  unknown
	I0410 22:16:27.155851   40336 command_runner.go:130] > GitTreeState:   clean
	I0410 22:16:27.155857   40336 command_runner.go:130] > BuildDate:      2024-04-10T15:40:24Z
	I0410 22:16:27.155861   40336 command_runner.go:130] > GoVersion:      go1.21.6
	I0410 22:16:27.155865   40336 command_runner.go:130] > Compiler:       gc
	I0410 22:16:27.155869   40336 command_runner.go:130] > Platform:       linux/amd64
	I0410 22:16:27.155873   40336 command_runner.go:130] > Linkmode:       dynamic
	I0410 22:16:27.155878   40336 command_runner.go:130] > BuildTags:      
	I0410 22:16:27.155882   40336 command_runner.go:130] >   containers_image_ostree_stub
	I0410 22:16:27.155886   40336 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0410 22:16:27.155890   40336 command_runner.go:130] >   btrfs_noversion
	I0410 22:16:27.155894   40336 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0410 22:16:27.155898   40336 command_runner.go:130] >   libdm_no_deferred_remove
	I0410 22:16:27.155905   40336 command_runner.go:130] >   seccomp
	I0410 22:16:27.155909   40336 command_runner.go:130] > LDFlags:          unknown
	I0410 22:16:27.155914   40336 command_runner.go:130] > SeccompEnabled:   true
	I0410 22:16:27.155918   40336 command_runner.go:130] > AppArmorEnabled:  false
	I0410 22:16:27.160443   40336 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:16:27.162094   40336 main.go:141] libmachine: (multinode-824789) Calling .GetIP
	I0410 22:16:27.164700   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:27.165017   40336 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:16:27.165049   40336 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:16:27.165241   40336 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 22:16:27.169579   40336 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0410 22:16:27.169759   40336 kubeadm.go:877] updating cluster {Name:multinode-824789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
29.3 ClusterName:multinode-824789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.224 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:16:27.169890   40336 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:16:27.169930   40336 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:16:27.214772   40336 command_runner.go:130] > {
	I0410 22:16:27.214799   40336 command_runner.go:130] >   "images": [
	I0410 22:16:27.214804   40336 command_runner.go:130] >     {
	I0410 22:16:27.214816   40336 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0410 22:16:27.214822   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.214833   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0410 22:16:27.214838   40336 command_runner.go:130] >       ],
	I0410 22:16:27.214843   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.214856   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0410 22:16:27.214874   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0410 22:16:27.214880   40336 command_runner.go:130] >       ],
	I0410 22:16:27.214886   40336 command_runner.go:130] >       "size": "65291810",
	I0410 22:16:27.214892   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.214899   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.214916   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.214923   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.214927   40336 command_runner.go:130] >     },
	I0410 22:16:27.214930   40336 command_runner.go:130] >     {
	I0410 22:16:27.214936   40336 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0410 22:16:27.214951   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.214962   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0410 22:16:27.214969   40336 command_runner.go:130] >       ],
	I0410 22:16:27.214973   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.214980   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0410 22:16:27.214989   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0410 22:16:27.214992   40336 command_runner.go:130] >       ],
	I0410 22:16:27.214997   40336 command_runner.go:130] >       "size": "1363676",
	I0410 22:16:27.215003   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.215010   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215016   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215020   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215024   40336 command_runner.go:130] >     },
	I0410 22:16:27.215028   40336 command_runner.go:130] >     {
	I0410 22:16:27.215033   40336 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0410 22:16:27.215038   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215043   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0410 22:16:27.215049   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215053   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215063   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0410 22:16:27.215073   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0410 22:16:27.215078   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215082   40336 command_runner.go:130] >       "size": "31470524",
	I0410 22:16:27.215087   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.215093   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215097   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215101   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215104   40336 command_runner.go:130] >     },
	I0410 22:16:27.215107   40336 command_runner.go:130] >     {
	I0410 22:16:27.215113   40336 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0410 22:16:27.215119   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215123   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0410 22:16:27.215129   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215133   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215140   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0410 22:16:27.215176   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0410 22:16:27.215190   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215194   40336 command_runner.go:130] >       "size": "61245718",
	I0410 22:16:27.215198   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.215204   40336 command_runner.go:130] >       "username": "nonroot",
	I0410 22:16:27.215208   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215216   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215220   40336 command_runner.go:130] >     },
	I0410 22:16:27.215223   40336 command_runner.go:130] >     {
	I0410 22:16:27.215230   40336 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0410 22:16:27.215236   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215241   40336 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0410 22:16:27.215247   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215252   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215259   40336 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0410 22:16:27.215268   40336 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0410 22:16:27.215271   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215276   40336 command_runner.go:130] >       "size": "150779692",
	I0410 22:16:27.215282   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.215286   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.215289   40336 command_runner.go:130] >       },
	I0410 22:16:27.215293   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215297   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215301   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215307   40336 command_runner.go:130] >     },
	I0410 22:16:27.215310   40336 command_runner.go:130] >     {
	I0410 22:16:27.215316   40336 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0410 22:16:27.215320   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215326   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0410 22:16:27.215333   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215337   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215344   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0410 22:16:27.215354   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0410 22:16:27.215357   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215361   40336 command_runner.go:130] >       "size": "128508878",
	I0410 22:16:27.215367   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.215371   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.215379   40336 command_runner.go:130] >       },
	I0410 22:16:27.215385   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215389   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215395   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215398   40336 command_runner.go:130] >     },
	I0410 22:16:27.215401   40336 command_runner.go:130] >     {
	I0410 22:16:27.215407   40336 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0410 22:16:27.215412   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215417   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0410 22:16:27.215420   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215424   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215432   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0410 22:16:27.215442   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0410 22:16:27.215445   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215449   40336 command_runner.go:130] >       "size": "123142962",
	I0410 22:16:27.215452   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.215458   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.215463   40336 command_runner.go:130] >       },
	I0410 22:16:27.215467   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215473   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215477   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215480   40336 command_runner.go:130] >     },
	I0410 22:16:27.215483   40336 command_runner.go:130] >     {
	I0410 22:16:27.215489   40336 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0410 22:16:27.215495   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215500   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0410 22:16:27.215506   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215510   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215530   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0410 22:16:27.215540   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0410 22:16:27.215543   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215547   40336 command_runner.go:130] >       "size": "83634073",
	I0410 22:16:27.215553   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.215556   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215560   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215564   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215571   40336 command_runner.go:130] >     },
	I0410 22:16:27.215574   40336 command_runner.go:130] >     {
	I0410 22:16:27.215580   40336 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0410 22:16:27.215584   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215588   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0410 22:16:27.215591   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215595   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215602   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0410 22:16:27.215609   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0410 22:16:27.215616   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215620   40336 command_runner.go:130] >       "size": "60724018",
	I0410 22:16:27.215623   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.215627   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.215630   40336 command_runner.go:130] >       },
	I0410 22:16:27.215635   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215639   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215643   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.215646   40336 command_runner.go:130] >     },
	I0410 22:16:27.215650   40336 command_runner.go:130] >     {
	I0410 22:16:27.215656   40336 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0410 22:16:27.215662   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.215666   40336 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0410 22:16:27.215670   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215674   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.215681   40336 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0410 22:16:27.215690   40336 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0410 22:16:27.215695   40336 command_runner.go:130] >       ],
	I0410 22:16:27.215699   40336 command_runner.go:130] >       "size": "750414",
	I0410 22:16:27.215705   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.215709   40336 command_runner.go:130] >         "value": "65535"
	I0410 22:16:27.215712   40336 command_runner.go:130] >       },
	I0410 22:16:27.215718   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.215722   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.215728   40336 command_runner.go:130] >       "pinned": true
	I0410 22:16:27.215731   40336 command_runner.go:130] >     }
	I0410 22:16:27.215734   40336 command_runner.go:130] >   ]
	I0410 22:16:27.215742   40336 command_runner.go:130] > }
	I0410 22:16:27.215895   40336 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:16:27.215905   40336 crio.go:433] Images already preloaded, skipping extraction
	I0410 22:16:27.215947   40336 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:16:27.255788   40336 command_runner.go:130] > {
	I0410 22:16:27.255807   40336 command_runner.go:130] >   "images": [
	I0410 22:16:27.255811   40336 command_runner.go:130] >     {
	I0410 22:16:27.255819   40336 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0410 22:16:27.255824   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.255833   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0410 22:16:27.255838   40336 command_runner.go:130] >       ],
	I0410 22:16:27.255845   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.255858   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0410 22:16:27.255868   40336 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0410 22:16:27.255873   40336 command_runner.go:130] >       ],
	I0410 22:16:27.255880   40336 command_runner.go:130] >       "size": "65291810",
	I0410 22:16:27.255890   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.255895   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.255923   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.255936   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.255941   40336 command_runner.go:130] >     },
	I0410 22:16:27.255947   40336 command_runner.go:130] >     {
	I0410 22:16:27.255960   40336 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0410 22:16:27.255965   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.255971   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0410 22:16:27.255975   40336 command_runner.go:130] >       ],
	I0410 22:16:27.255979   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.255986   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0410 22:16:27.255993   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0410 22:16:27.255996   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256000   40336 command_runner.go:130] >       "size": "1363676",
	I0410 22:16:27.256004   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.256011   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256014   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256018   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256023   40336 command_runner.go:130] >     },
	I0410 22:16:27.256027   40336 command_runner.go:130] >     {
	I0410 22:16:27.256033   40336 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0410 22:16:27.256038   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256043   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0410 22:16:27.256047   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256051   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256059   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0410 22:16:27.256067   40336 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0410 22:16:27.256087   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256099   40336 command_runner.go:130] >       "size": "31470524",
	I0410 22:16:27.256104   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.256108   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256111   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256115   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256121   40336 command_runner.go:130] >     },
	I0410 22:16:27.256125   40336 command_runner.go:130] >     {
	I0410 22:16:27.256130   40336 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0410 22:16:27.256134   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256139   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0410 22:16:27.256145   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256149   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256155   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0410 22:16:27.256167   40336 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0410 22:16:27.256171   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256175   40336 command_runner.go:130] >       "size": "61245718",
	I0410 22:16:27.256179   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.256183   40336 command_runner.go:130] >       "username": "nonroot",
	I0410 22:16:27.256191   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256195   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256198   40336 command_runner.go:130] >     },
	I0410 22:16:27.256201   40336 command_runner.go:130] >     {
	I0410 22:16:27.256207   40336 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0410 22:16:27.256212   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256216   40336 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0410 22:16:27.256219   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256223   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256234   40336 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0410 22:16:27.256240   40336 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0410 22:16:27.256246   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256250   40336 command_runner.go:130] >       "size": "150779692",
	I0410 22:16:27.256254   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.256257   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.256261   40336 command_runner.go:130] >       },
	I0410 22:16:27.256265   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256270   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256274   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256277   40336 command_runner.go:130] >     },
	I0410 22:16:27.256280   40336 command_runner.go:130] >     {
	I0410 22:16:27.256288   40336 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0410 22:16:27.256292   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256297   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0410 22:16:27.256300   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256304   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256311   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0410 22:16:27.256318   40336 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0410 22:16:27.256324   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256328   40336 command_runner.go:130] >       "size": "128508878",
	I0410 22:16:27.256331   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.256337   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.256340   40336 command_runner.go:130] >       },
	I0410 22:16:27.256344   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256348   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256352   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256355   40336 command_runner.go:130] >     },
	I0410 22:16:27.256358   40336 command_runner.go:130] >     {
	I0410 22:16:27.256364   40336 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0410 22:16:27.256368   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256373   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0410 22:16:27.256379   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256383   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256393   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0410 22:16:27.256414   40336 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0410 22:16:27.256423   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256427   40336 command_runner.go:130] >       "size": "123142962",
	I0410 22:16:27.256431   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.256436   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.256441   40336 command_runner.go:130] >       },
	I0410 22:16:27.256445   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256449   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256460   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256466   40336 command_runner.go:130] >     },
	I0410 22:16:27.256469   40336 command_runner.go:130] >     {
	I0410 22:16:27.256475   40336 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0410 22:16:27.256481   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256486   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0410 22:16:27.256492   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256496   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256510   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0410 22:16:27.256520   40336 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0410 22:16:27.256523   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256527   40336 command_runner.go:130] >       "size": "83634073",
	I0410 22:16:27.256531   40336 command_runner.go:130] >       "uid": null,
	I0410 22:16:27.256535   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256539   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256543   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256546   40336 command_runner.go:130] >     },
	I0410 22:16:27.256549   40336 command_runner.go:130] >     {
	I0410 22:16:27.256555   40336 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0410 22:16:27.256561   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256567   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0410 22:16:27.256572   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256577   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256584   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0410 22:16:27.256593   40336 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0410 22:16:27.256597   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256601   40336 command_runner.go:130] >       "size": "60724018",
	I0410 22:16:27.256607   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.256610   40336 command_runner.go:130] >         "value": "0"
	I0410 22:16:27.256618   40336 command_runner.go:130] >       },
	I0410 22:16:27.256624   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256628   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256634   40336 command_runner.go:130] >       "pinned": false
	I0410 22:16:27.256637   40336 command_runner.go:130] >     },
	I0410 22:16:27.256640   40336 command_runner.go:130] >     {
	I0410 22:16:27.256646   40336 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0410 22:16:27.256652   40336 command_runner.go:130] >       "repoTags": [
	I0410 22:16:27.256657   40336 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0410 22:16:27.256661   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256665   40336 command_runner.go:130] >       "repoDigests": [
	I0410 22:16:27.256674   40336 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0410 22:16:27.256683   40336 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0410 22:16:27.256689   40336 command_runner.go:130] >       ],
	I0410 22:16:27.256693   40336 command_runner.go:130] >       "size": "750414",
	I0410 22:16:27.256696   40336 command_runner.go:130] >       "uid": {
	I0410 22:16:27.256700   40336 command_runner.go:130] >         "value": "65535"
	I0410 22:16:27.256704   40336 command_runner.go:130] >       },
	I0410 22:16:27.256708   40336 command_runner.go:130] >       "username": "",
	I0410 22:16:27.256711   40336 command_runner.go:130] >       "spec": null,
	I0410 22:16:27.256715   40336 command_runner.go:130] >       "pinned": true
	I0410 22:16:27.256718   40336 command_runner.go:130] >     }
	I0410 22:16:27.256721   40336 command_runner.go:130] >   ]
	I0410 22:16:27.256726   40336 command_runner.go:130] > }
	I0410 22:16:27.256824   40336 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:16:27.256834   40336 cache_images.go:84] Images are preloaded, skipping loading
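The preload check runs `sudo crictl images --output json` and compares the returned repoTags against the images needed for Kubernetes v1.29.3; here everything is already present, so extraction and cache loading are skipped. A hedged Go sketch of that comparison; the struct mirrors only the JSON fields used above, and the required-image list in main is an illustrative subset, not minikube's authoritative list:

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages mirrors the shape of `crictl images --output json` as printed in
// the log above, limited to the fields this sketch needs.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages reports which required repo tags are absent from the crictl output.
func missingImages(raw []byte, required []string) ([]string, error) {
	var list crictlImages
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, want := range required {
		if !have[want] {
			missing = append(missing, want)
		}
	}
	return missing, nil
}

func main() {
	raw := []byte(`{"images":[{"id":"39f9","repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"]}]}`)
	missing, err := missingImages(raw, []string{"registry.k8s.io/kube-apiserver:v1.29.3", "registry.k8s.io/etcd:3.5.12-0"})
	if err != nil {
		panic(err)
	}
	fmt.Println("missing:", missing)
}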
	I0410 22:16:27.256843   40336 kubeadm.go:928] updating node { 192.168.39.94 8443 v1.29.3 crio true true} ...
	I0410 22:16:27.256938   40336 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-824789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-824789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
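kubeadm.go:940 above logs the kubelet systemd drop-in that will be installed, with ExecStart pointing at the versioned kubelet binary and the node name and IP filled in. A Go sketch that renders a drop-in of that shape with text/template; the template text is reconstructed from the log above and may differ from minikube's real template:

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is reconstructed from the logged unit; illustrative only.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the node configured in this log.
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.29.3", "multinode-824789", "192.168.39.94"})
}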
	I0410 22:16:27.256997   40336 ssh_runner.go:195] Run: crio config
	I0410 22:16:27.309416   40336 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0410 22:16:27.309444   40336 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0410 22:16:27.309454   40336 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0410 22:16:27.309459   40336 command_runner.go:130] > #
	I0410 22:16:27.309469   40336 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0410 22:16:27.309478   40336 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0410 22:16:27.309489   40336 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0410 22:16:27.309499   40336 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0410 22:16:27.309512   40336 command_runner.go:130] > # reload'.
	I0410 22:16:27.309521   40336 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0410 22:16:27.309538   40336 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0410 22:16:27.309549   40336 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0410 22:16:27.309563   40336 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0410 22:16:27.309571   40336 command_runner.go:130] > [crio]
	I0410 22:16:27.309580   40336 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0410 22:16:27.309587   40336 command_runner.go:130] > # containers images, in this directory.
	I0410 22:16:27.309595   40336 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0410 22:16:27.309611   40336 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0410 22:16:27.309659   40336 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0410 22:16:27.309680   40336 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0410 22:16:27.309892   40336 command_runner.go:130] > # imagestore = ""
	I0410 22:16:27.309909   40336 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0410 22:16:27.309919   40336 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0410 22:16:27.310020   40336 command_runner.go:130] > storage_driver = "overlay"
	I0410 22:16:27.310037   40336 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0410 22:16:27.310047   40336 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0410 22:16:27.310056   40336 command_runner.go:130] > storage_option = [
	I0410 22:16:27.310194   40336 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0410 22:16:27.310320   40336 command_runner.go:130] > ]
	I0410 22:16:27.310340   40336 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0410 22:16:27.310351   40336 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0410 22:16:27.310604   40336 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0410 22:16:27.310616   40336 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0410 22:16:27.310622   40336 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0410 22:16:27.310627   40336 command_runner.go:130] > # always happen on a node reboot
	I0410 22:16:27.310904   40336 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0410 22:16:27.310932   40336 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0410 22:16:27.310944   40336 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0410 22:16:27.310953   40336 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0410 22:16:27.311044   40336 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0410 22:16:27.311057   40336 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0410 22:16:27.311065   40336 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0410 22:16:27.311384   40336 command_runner.go:130] > # internal_wipe = true
	I0410 22:16:27.311396   40336 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0410 22:16:27.311401   40336 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0410 22:16:27.311687   40336 command_runner.go:130] > # internal_repair = false
	I0410 22:16:27.311704   40336 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0410 22:16:27.311714   40336 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0410 22:16:27.311725   40336 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0410 22:16:27.311980   40336 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0410 22:16:27.311991   40336 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0410 22:16:27.312000   40336 command_runner.go:130] > [crio.api]
	I0410 22:16:27.312006   40336 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0410 22:16:27.312631   40336 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0410 22:16:27.312646   40336 command_runner.go:130] > # IP address on which the stream server will listen.
	I0410 22:16:27.312651   40336 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0410 22:16:27.312657   40336 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0410 22:16:27.312662   40336 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0410 22:16:27.312666   40336 command_runner.go:130] > # stream_port = "0"
	I0410 22:16:27.312672   40336 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0410 22:16:27.312676   40336 command_runner.go:130] > # stream_enable_tls = false
	I0410 22:16:27.312684   40336 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0410 22:16:27.312688   40336 command_runner.go:130] > # stream_idle_timeout = ""
	I0410 22:16:27.312700   40336 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0410 22:16:27.312707   40336 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0410 22:16:27.312710   40336 command_runner.go:130] > # minutes.
	I0410 22:16:27.312715   40336 command_runner.go:130] > # stream_tls_cert = ""
	I0410 22:16:27.312720   40336 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0410 22:16:27.312726   40336 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0410 22:16:27.312733   40336 command_runner.go:130] > # stream_tls_key = ""
	I0410 22:16:27.312738   40336 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0410 22:16:27.312744   40336 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0410 22:16:27.312761   40336 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0410 22:16:27.312772   40336 command_runner.go:130] > # stream_tls_ca = ""
	I0410 22:16:27.312783   40336 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0410 22:16:27.312791   40336 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0410 22:16:27.312803   40336 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0410 22:16:27.312811   40336 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0410 22:16:27.312817   40336 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0410 22:16:27.312823   40336 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0410 22:16:27.312830   40336 command_runner.go:130] > [crio.runtime]
	I0410 22:16:27.312840   40336 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0410 22:16:27.312852   40336 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0410 22:16:27.312861   40336 command_runner.go:130] > # "nofile=1024:2048"
	I0410 22:16:27.312871   40336 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0410 22:16:27.312878   40336 command_runner.go:130] > # default_ulimits = [
	I0410 22:16:27.312881   40336 command_runner.go:130] > # ]
	I0410 22:16:27.312887   40336 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0410 22:16:27.312895   40336 command_runner.go:130] > # no_pivot = false
	I0410 22:16:27.312904   40336 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0410 22:16:27.312918   40336 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0410 22:16:27.312930   40336 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0410 22:16:27.312943   40336 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0410 22:16:27.312951   40336 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0410 22:16:27.312959   40336 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0410 22:16:27.312967   40336 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0410 22:16:27.312971   40336 command_runner.go:130] > # Cgroup setting for conmon
	I0410 22:16:27.312980   40336 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0410 22:16:27.312987   40336 command_runner.go:130] > conmon_cgroup = "pod"
	I0410 22:16:27.313000   40336 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0410 22:16:27.313014   40336 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0410 22:16:27.313029   40336 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0410 22:16:27.313038   40336 command_runner.go:130] > conmon_env = [
	I0410 22:16:27.313047   40336 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0410 22:16:27.313053   40336 command_runner.go:130] > ]
	I0410 22:16:27.313058   40336 command_runner.go:130] > # Additional environment variables to set for all the
	I0410 22:16:27.313063   40336 command_runner.go:130] > # containers. These are overridden if set in the
	I0410 22:16:27.313075   40336 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0410 22:16:27.313081   40336 command_runner.go:130] > # default_env = [
	I0410 22:16:27.313090   40336 command_runner.go:130] > # ]
	I0410 22:16:27.313099   40336 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0410 22:16:27.313116   40336 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0410 22:16:27.313125   40336 command_runner.go:130] > # selinux = false
	I0410 22:16:27.313136   40336 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0410 22:16:27.313149   40336 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0410 22:16:27.313161   40336 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0410 22:16:27.313169   40336 command_runner.go:130] > # seccomp_profile = ""
	I0410 22:16:27.313176   40336 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0410 22:16:27.313188   40336 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0410 22:16:27.313201   40336 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0410 22:16:27.313210   40336 command_runner.go:130] > # which might increase security.
	I0410 22:16:27.313221   40336 command_runner.go:130] > # This option is currently deprecated,
	I0410 22:16:27.313230   40336 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0410 22:16:27.313241   40336 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0410 22:16:27.313255   40336 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0410 22:16:27.313266   40336 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0410 22:16:27.313276   40336 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0410 22:16:27.313289   40336 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0410 22:16:27.313302   40336 command_runner.go:130] > # This option supports live configuration reload.
	I0410 22:16:27.313313   40336 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0410 22:16:27.313326   40336 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0410 22:16:27.313332   40336 command_runner.go:130] > # the cgroup blockio controller.
	I0410 22:16:27.313342   40336 command_runner.go:130] > # blockio_config_file = ""
	I0410 22:16:27.313351   40336 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0410 22:16:27.313359   40336 command_runner.go:130] > # blockio parameters.
	I0410 22:16:27.313366   40336 command_runner.go:130] > # blockio_reload = false
	I0410 22:16:27.313379   40336 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0410 22:16:27.313389   40336 command_runner.go:130] > # irqbalance daemon.
	I0410 22:16:27.313400   40336 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0410 22:16:27.313413   40336 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0410 22:16:27.313427   40336 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0410 22:16:27.313441   40336 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0410 22:16:27.313458   40336 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0410 22:16:27.313469   40336 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0410 22:16:27.313479   40336 command_runner.go:130] > # This option supports live configuration reload.
	I0410 22:16:27.313489   40336 command_runner.go:130] > # rdt_config_file = ""
	I0410 22:16:27.313501   40336 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0410 22:16:27.313513   40336 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0410 22:16:27.313560   40336 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0410 22:16:27.313572   40336 command_runner.go:130] > # separate_pull_cgroup = ""
	I0410 22:16:27.313583   40336 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0410 22:16:27.313593   40336 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0410 22:16:27.313603   40336 command_runner.go:130] > # will be added.
	I0410 22:16:27.313610   40336 command_runner.go:130] > # default_capabilities = [
	I0410 22:16:27.313617   40336 command_runner.go:130] > # 	"CHOWN",
	I0410 22:16:27.313623   40336 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0410 22:16:27.313630   40336 command_runner.go:130] > # 	"FSETID",
	I0410 22:16:27.313638   40336 command_runner.go:130] > # 	"FOWNER",
	I0410 22:16:27.313643   40336 command_runner.go:130] > # 	"SETGID",
	I0410 22:16:27.313652   40336 command_runner.go:130] > # 	"SETUID",
	I0410 22:16:27.313658   40336 command_runner.go:130] > # 	"SETPCAP",
	I0410 22:16:27.313668   40336 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0410 22:16:27.313678   40336 command_runner.go:130] > # 	"KILL",
	I0410 22:16:27.313682   40336 command_runner.go:130] > # ]
	I0410 22:16:27.313696   40336 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0410 22:16:27.313711   40336 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0410 22:16:27.313720   40336 command_runner.go:130] > # add_inheritable_capabilities = false
	I0410 22:16:27.313729   40336 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0410 22:16:27.313741   40336 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0410 22:16:27.313748   40336 command_runner.go:130] > default_sysctls = [
	I0410 22:16:27.313763   40336 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0410 22:16:27.313769   40336 command_runner.go:130] > ]
	I0410 22:16:27.313777   40336 command_runner.go:130] > # List of devices on the host that a
	I0410 22:16:27.313791   40336 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0410 22:16:27.313800   40336 command_runner.go:130] > # allowed_devices = [
	I0410 22:16:27.313807   40336 command_runner.go:130] > # 	"/dev/fuse",
	I0410 22:16:27.313816   40336 command_runner.go:130] > # ]
	I0410 22:16:27.313824   40336 command_runner.go:130] > # List of additional devices, specified as
	I0410 22:16:27.313836   40336 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0410 22:16:27.313848   40336 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0410 22:16:27.313858   40336 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0410 22:16:27.313868   40336 command_runner.go:130] > # additional_devices = [
	I0410 22:16:27.313873   40336 command_runner.go:130] > # ]
	I0410 22:16:27.313884   40336 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0410 22:16:27.313894   40336 command_runner.go:130] > # cdi_spec_dirs = [
	I0410 22:16:27.313904   40336 command_runner.go:130] > # 	"/etc/cdi",
	I0410 22:16:27.313915   40336 command_runner.go:130] > # 	"/var/run/cdi",
	I0410 22:16:27.313921   40336 command_runner.go:130] > # ]
	I0410 22:16:27.313934   40336 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0410 22:16:27.313947   40336 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0410 22:16:27.313957   40336 command_runner.go:130] > # Defaults to false.
	I0410 22:16:27.313969   40336 command_runner.go:130] > # device_ownership_from_security_context = false
	I0410 22:16:27.313981   40336 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0410 22:16:27.313995   40336 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0410 22:16:27.314005   40336 command_runner.go:130] > # hooks_dir = [
	I0410 22:16:27.314012   40336 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0410 22:16:27.314022   40336 command_runner.go:130] > # ]
	I0410 22:16:27.314032   40336 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0410 22:16:27.314045   40336 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0410 22:16:27.314057   40336 command_runner.go:130] > # its default mounts from the following two files:
	I0410 22:16:27.314062   40336 command_runner.go:130] > #
	I0410 22:16:27.314074   40336 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0410 22:16:27.314085   40336 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0410 22:16:27.314098   40336 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0410 22:16:27.314105   40336 command_runner.go:130] > #
	I0410 22:16:27.314116   40336 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0410 22:16:27.314130   40336 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0410 22:16:27.314143   40336 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0410 22:16:27.314155   40336 command_runner.go:130] > #      only add mounts it finds in this file.
	I0410 22:16:27.314171   40336 command_runner.go:130] > #
	I0410 22:16:27.314179   40336 command_runner.go:130] > # default_mounts_file = ""
	I0410 22:16:27.314191   40336 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0410 22:16:27.314205   40336 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0410 22:16:27.314215   40336 command_runner.go:130] > pids_limit = 1024
	I0410 22:16:27.314222   40336 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0410 22:16:27.314228   40336 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0410 22:16:27.314233   40336 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0410 22:16:27.314241   40336 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0410 22:16:27.314245   40336 command_runner.go:130] > # log_size_max = -1
	I0410 22:16:27.314256   40336 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0410 22:16:27.314264   40336 command_runner.go:130] > # log_to_journald = false
	I0410 22:16:27.314270   40336 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0410 22:16:27.314277   40336 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0410 22:16:27.314282   40336 command_runner.go:130] > # Path to directory for container attach sockets.
	I0410 22:16:27.314289   40336 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0410 22:16:27.314296   40336 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0410 22:16:27.314300   40336 command_runner.go:130] > # bind_mount_prefix = ""
	I0410 22:16:27.314309   40336 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0410 22:16:27.314314   40336 command_runner.go:130] > # read_only = false
	I0410 22:16:27.314320   40336 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0410 22:16:27.314329   40336 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0410 22:16:27.314333   40336 command_runner.go:130] > # live configuration reload.
	I0410 22:16:27.314337   40336 command_runner.go:130] > # log_level = "info"
	I0410 22:16:27.314344   40336 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0410 22:16:27.314349   40336 command_runner.go:130] > # This option supports live configuration reload.
	I0410 22:16:27.314355   40336 command_runner.go:130] > # log_filter = ""
	I0410 22:16:27.314361   40336 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0410 22:16:27.314369   40336 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0410 22:16:27.314373   40336 command_runner.go:130] > # separated by comma.
	I0410 22:16:27.314383   40336 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0410 22:16:27.314389   40336 command_runner.go:130] > # uid_mappings = ""
	I0410 22:16:27.314394   40336 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0410 22:16:27.314403   40336 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0410 22:16:27.314409   40336 command_runner.go:130] > # separated by comma.
	I0410 22:16:27.314416   40336 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0410 22:16:27.314424   40336 command_runner.go:130] > # gid_mappings = ""
	I0410 22:16:27.314436   40336 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0410 22:16:27.314449   40336 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0410 22:16:27.314465   40336 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0410 22:16:27.314475   40336 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0410 22:16:27.314481   40336 command_runner.go:130] > # minimum_mappable_uid = -1
	I0410 22:16:27.314491   40336 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0410 22:16:27.314504   40336 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0410 22:16:27.314518   40336 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0410 22:16:27.314531   40336 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0410 22:16:27.314543   40336 command_runner.go:130] > # minimum_mappable_gid = -1
	I0410 22:16:27.314550   40336 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0410 22:16:27.314556   40336 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0410 22:16:27.314566   40336 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0410 22:16:27.314576   40336 command_runner.go:130] > # ctr_stop_timeout = 30
	I0410 22:16:27.314588   40336 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0410 22:16:27.314601   40336 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0410 22:16:27.314613   40336 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0410 22:16:27.314620   40336 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0410 22:16:27.314630   40336 command_runner.go:130] > drop_infra_ctr = false
	I0410 22:16:27.314643   40336 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0410 22:16:27.314655   40336 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0410 22:16:27.314667   40336 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0410 22:16:27.314676   40336 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0410 22:16:27.314687   40336 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0410 22:16:27.314700   40336 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0410 22:16:27.314711   40336 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0410 22:16:27.314723   40336 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0410 22:16:27.314732   40336 command_runner.go:130] > # shared_cpuset = ""
	I0410 22:16:27.314738   40336 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0410 22:16:27.314743   40336 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0410 22:16:27.314747   40336 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0410 22:16:27.314754   40336 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0410 22:16:27.314758   40336 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0410 22:16:27.314763   40336 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0410 22:16:27.314769   40336 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0410 22:16:27.314775   40336 command_runner.go:130] > # enable_criu_support = false
	I0410 22:16:27.314781   40336 command_runner.go:130] > # Enable/disable the generation of the container,
	I0410 22:16:27.314791   40336 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0410 22:16:27.314798   40336 command_runner.go:130] > # enable_pod_events = false
	I0410 22:16:27.314804   40336 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0410 22:16:27.314812   40336 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0410 22:16:27.314819   40336 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0410 22:16:27.314823   40336 command_runner.go:130] > # default_runtime = "runc"
	I0410 22:16:27.314830   40336 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0410 22:16:27.314844   40336 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0410 22:16:27.314876   40336 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0410 22:16:27.314893   40336 command_runner.go:130] > # creation as a file is not desired either.
	I0410 22:16:27.314908   40336 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0410 22:16:27.314920   40336 command_runner.go:130] > # the hostname is being managed dynamically.
	I0410 22:16:27.314929   40336 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0410 22:16:27.314937   40336 command_runner.go:130] > # ]
	I0410 22:16:27.314947   40336 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0410 22:16:27.314960   40336 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0410 22:16:27.314969   40336 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0410 22:16:27.314974   40336 command_runner.go:130] > # Each entry in the table should follow the format:
	I0410 22:16:27.314979   40336 command_runner.go:130] > #
	I0410 22:16:27.314984   40336 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0410 22:16:27.314991   40336 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0410 22:16:27.315031   40336 command_runner.go:130] > # runtime_type = "oci"
	I0410 22:16:27.315038   40336 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0410 22:16:27.315046   40336 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0410 22:16:27.315056   40336 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0410 22:16:27.315065   40336 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0410 22:16:27.315074   40336 command_runner.go:130] > # monitor_env = []
	I0410 22:16:27.315083   40336 command_runner.go:130] > # privileged_without_host_devices = false
	I0410 22:16:27.315092   40336 command_runner.go:130] > # allowed_annotations = []
	I0410 22:16:27.315100   40336 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0410 22:16:27.315110   40336 command_runner.go:130] > # Where:
	I0410 22:16:27.315119   40336 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0410 22:16:27.315133   40336 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0410 22:16:27.315146   40336 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0410 22:16:27.315158   40336 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0410 22:16:27.315168   40336 command_runner.go:130] > #   in $PATH.
	I0410 22:16:27.315178   40336 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0410 22:16:27.315190   40336 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0410 22:16:27.315206   40336 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0410 22:16:27.315215   40336 command_runner.go:130] > #   state.
	I0410 22:16:27.315226   40336 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0410 22:16:27.315238   40336 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0410 22:16:27.315248   40336 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0410 22:16:27.315256   40336 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0410 22:16:27.315267   40336 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0410 22:16:27.315275   40336 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0410 22:16:27.315281   40336 command_runner.go:130] > #   The currently recognized values are:
	I0410 22:16:27.315294   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0410 22:16:27.315309   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0410 22:16:27.315322   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0410 22:16:27.315335   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0410 22:16:27.315349   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0410 22:16:27.315364   40336 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0410 22:16:27.315377   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0410 22:16:27.315391   40336 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0410 22:16:27.315405   40336 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0410 22:16:27.315418   40336 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0410 22:16:27.315429   40336 command_runner.go:130] > #   deprecated option "conmon".
	I0410 22:16:27.315442   40336 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0410 22:16:27.315452   40336 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0410 22:16:27.315465   40336 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0410 22:16:27.315476   40336 command_runner.go:130] > #   should be moved to the container's cgroup
	I0410 22:16:27.315491   40336 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0410 22:16:27.315501   40336 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0410 22:16:27.315514   40336 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0410 22:16:27.315526   40336 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0410 22:16:27.315535   40336 command_runner.go:130] > #
	I0410 22:16:27.315542   40336 command_runner.go:130] > # Using the seccomp notifier feature:
	I0410 22:16:27.315550   40336 command_runner.go:130] > #
	I0410 22:16:27.315561   40336 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0410 22:16:27.315575   40336 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0410 22:16:27.315582   40336 command_runner.go:130] > #
	I0410 22:16:27.315596   40336 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0410 22:16:27.315609   40336 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0410 22:16:27.315617   40336 command_runner.go:130] > #
	I0410 22:16:27.315627   40336 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0410 22:16:27.315634   40336 command_runner.go:130] > # feature.
	I0410 22:16:27.315643   40336 command_runner.go:130] > #
	I0410 22:16:27.315654   40336 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0410 22:16:27.315668   40336 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0410 22:16:27.315686   40336 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0410 22:16:27.315700   40336 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0410 22:16:27.315710   40336 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0410 22:16:27.315716   40336 command_runner.go:130] > #
	I0410 22:16:27.315726   40336 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0410 22:16:27.315739   40336 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0410 22:16:27.315748   40336 command_runner.go:130] > #
	I0410 22:16:27.315757   40336 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0410 22:16:27.315769   40336 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0410 22:16:27.315777   40336 command_runner.go:130] > #
	I0410 22:16:27.315786   40336 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0410 22:16:27.315797   40336 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0410 22:16:27.315801   40336 command_runner.go:130] > # limitation.
	I0410 22:16:27.315810   40336 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0410 22:16:27.315821   40336 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0410 22:16:27.315831   40336 command_runner.go:130] > runtime_type = "oci"
	I0410 22:16:27.315841   40336 command_runner.go:130] > runtime_root = "/run/runc"
	I0410 22:16:27.315850   40336 command_runner.go:130] > runtime_config_path = ""
	I0410 22:16:27.315862   40336 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0410 22:16:27.315871   40336 command_runner.go:130] > monitor_cgroup = "pod"
	I0410 22:16:27.315880   40336 command_runner.go:130] > monitor_exec_cgroup = ""
	I0410 22:16:27.315886   40336 command_runner.go:130] > monitor_env = [
	I0410 22:16:27.315894   40336 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0410 22:16:27.315902   40336 command_runner.go:130] > ]
	I0410 22:16:27.315911   40336 command_runner.go:130] > privileged_without_host_devices = false
	I0410 22:16:27.315925   40336 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0410 22:16:27.315937   40336 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0410 22:16:27.315950   40336 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0410 22:16:27.315964   40336 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0410 22:16:27.315974   40336 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0410 22:16:27.315986   40336 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0410 22:16:27.316007   40336 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0410 22:16:27.316023   40336 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0410 22:16:27.316035   40336 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0410 22:16:27.316050   40336 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0410 22:16:27.316056   40336 command_runner.go:130] > # Example:
	I0410 22:16:27.316065   40336 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0410 22:16:27.316078   40336 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0410 22:16:27.316090   40336 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0410 22:16:27.316101   40336 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0410 22:16:27.316109   40336 command_runner.go:130] > # cpuset = 0
	I0410 22:16:27.316118   40336 command_runner.go:130] > # cpushares = "0-1"
	I0410 22:16:27.316125   40336 command_runner.go:130] > # Where:
	I0410 22:16:27.316135   40336 command_runner.go:130] > # The workload name is workload-type.
	I0410 22:16:27.316145   40336 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0410 22:16:27.316155   40336 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0410 22:16:27.316168   40336 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0410 22:16:27.316184   40336 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0410 22:16:27.316196   40336 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0410 22:16:27.316207   40336 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0410 22:16:27.316223   40336 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0410 22:16:27.316230   40336 command_runner.go:130] > # Default value is set to true
	I0410 22:16:27.316235   40336 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0410 22:16:27.316248   40336 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0410 22:16:27.316260   40336 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0410 22:16:27.316271   40336 command_runner.go:130] > # Default value is set to 'false'
	I0410 22:16:27.316281   40336 command_runner.go:130] > # disable_hostport_mapping = false
	I0410 22:16:27.316294   40336 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0410 22:16:27.316302   40336 command_runner.go:130] > #
	I0410 22:16:27.316313   40336 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0410 22:16:27.316323   40336 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0410 22:16:27.316336   40336 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0410 22:16:27.316349   40336 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0410 22:16:27.316358   40336 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0410 22:16:27.316364   40336 command_runner.go:130] > [crio.image]
	I0410 22:16:27.316373   40336 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0410 22:16:27.316379   40336 command_runner.go:130] > # default_transport = "docker://"
	I0410 22:16:27.316391   40336 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0410 22:16:27.316412   40336 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0410 22:16:27.316420   40336 command_runner.go:130] > # global_auth_file = ""
	I0410 22:16:27.316429   40336 command_runner.go:130] > # The image used to instantiate infra containers.
	I0410 22:16:27.316437   40336 command_runner.go:130] > # This option supports live configuration reload.
	I0410 22:16:27.316450   40336 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0410 22:16:27.316467   40336 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0410 22:16:27.316480   40336 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0410 22:16:27.316491   40336 command_runner.go:130] > # This option supports live configuration reload.
	I0410 22:16:27.316502   40336 command_runner.go:130] > # pause_image_auth_file = ""
	I0410 22:16:27.316514   40336 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0410 22:16:27.316527   40336 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0410 22:16:27.316540   40336 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0410 22:16:27.316553   40336 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0410 22:16:27.316561   40336 command_runner.go:130] > # pause_command = "/pause"
	I0410 22:16:27.316567   40336 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0410 22:16:27.316579   40336 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0410 22:16:27.316593   40336 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0410 22:16:27.316606   40336 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0410 22:16:27.316618   40336 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0410 22:16:27.316631   40336 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0410 22:16:27.316641   40336 command_runner.go:130] > # pinned_images = [
	I0410 22:16:27.316647   40336 command_runner.go:130] > # ]
	I0410 22:16:27.316653   40336 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0410 22:16:27.316667   40336 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0410 22:16:27.316681   40336 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0410 22:16:27.316694   40336 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0410 22:16:27.316706   40336 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0410 22:16:27.316715   40336 command_runner.go:130] > # signature_policy = ""
	I0410 22:16:27.316726   40336 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0410 22:16:27.316736   40336 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0410 22:16:27.316746   40336 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0410 22:16:27.316760   40336 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0410 22:16:27.316773   40336 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0410 22:16:27.316783   40336 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0410 22:16:27.316799   40336 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0410 22:16:27.316812   40336 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0410 22:16:27.316819   40336 command_runner.go:130] > # changing them here.
	I0410 22:16:27.316823   40336 command_runner.go:130] > # insecure_registries = [
	I0410 22:16:27.316831   40336 command_runner.go:130] > # ]
	I0410 22:16:27.316841   40336 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0410 22:16:27.316854   40336 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0410 22:16:27.316864   40336 command_runner.go:130] > # image_volumes = "mkdir"
	I0410 22:16:27.316876   40336 command_runner.go:130] > # Temporary directory to use for storing big files
	I0410 22:16:27.316886   40336 command_runner.go:130] > # big_files_temporary_dir = ""
	I0410 22:16:27.316899   40336 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0410 22:16:27.316906   40336 command_runner.go:130] > # CNI plugins.
	I0410 22:16:27.316910   40336 command_runner.go:130] > [crio.network]
	I0410 22:16:27.316917   40336 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0410 22:16:27.316923   40336 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0410 22:16:27.316928   40336 command_runner.go:130] > # cni_default_network = ""
	I0410 22:16:27.316937   40336 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0410 22:16:27.316948   40336 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0410 22:16:27.316961   40336 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0410 22:16:27.316970   40336 command_runner.go:130] > # plugin_dirs = [
	I0410 22:16:27.316979   40336 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0410 22:16:27.316987   40336 command_runner.go:130] > # ]
	I0410 22:16:27.316999   40336 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0410 22:16:27.317007   40336 command_runner.go:130] > [crio.metrics]
	I0410 22:16:27.317018   40336 command_runner.go:130] > # Globally enable or disable metrics support.
	I0410 22:16:27.317024   40336 command_runner.go:130] > enable_metrics = true
	I0410 22:16:27.317034   40336 command_runner.go:130] > # Specify enabled metrics collectors.
	I0410 22:16:27.317046   40336 command_runner.go:130] > # Per default all metrics are enabled.
	I0410 22:16:27.317058   40336 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0410 22:16:27.317071   40336 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0410 22:16:27.317084   40336 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0410 22:16:27.317100   40336 command_runner.go:130] > # metrics_collectors = [
	I0410 22:16:27.317110   40336 command_runner.go:130] > # 	"operations",
	I0410 22:16:27.317121   40336 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0410 22:16:27.317132   40336 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0410 22:16:27.317141   40336 command_runner.go:130] > # 	"operations_errors",
	I0410 22:16:27.317152   40336 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0410 22:16:27.317162   40336 command_runner.go:130] > # 	"image_pulls_by_name",
	I0410 22:16:27.317172   40336 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0410 22:16:27.317178   40336 command_runner.go:130] > # 	"image_pulls_failures",
	I0410 22:16:27.317183   40336 command_runner.go:130] > # 	"image_pulls_successes",
	I0410 22:16:27.317190   40336 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0410 22:16:27.317197   40336 command_runner.go:130] > # 	"image_layer_reuse",
	I0410 22:16:27.317204   40336 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0410 22:16:27.317210   40336 command_runner.go:130] > # 	"containers_oom_total",
	I0410 22:16:27.317217   40336 command_runner.go:130] > # 	"containers_oom",
	I0410 22:16:27.317221   40336 command_runner.go:130] > # 	"processes_defunct",
	I0410 22:16:27.317228   40336 command_runner.go:130] > # 	"operations_total",
	I0410 22:16:27.317232   40336 command_runner.go:130] > # 	"operations_latency_seconds",
	I0410 22:16:27.317239   40336 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0410 22:16:27.317243   40336 command_runner.go:130] > # 	"operations_errors_total",
	I0410 22:16:27.317249   40336 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0410 22:16:27.317254   40336 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0410 22:16:27.317260   40336 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0410 22:16:27.317265   40336 command_runner.go:130] > # 	"image_pulls_success_total",
	I0410 22:16:27.317271   40336 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0410 22:16:27.317276   40336 command_runner.go:130] > # 	"containers_oom_count_total",
	I0410 22:16:27.317283   40336 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0410 22:16:27.317287   40336 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0410 22:16:27.317295   40336 command_runner.go:130] > # ]
	I0410 22:16:27.317306   40336 command_runner.go:130] > # The port on which the metrics server will listen.
	I0410 22:16:27.317316   40336 command_runner.go:130] > # metrics_port = 9090
	I0410 22:16:27.317326   40336 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0410 22:16:27.317336   40336 command_runner.go:130] > # metrics_socket = ""
	I0410 22:16:27.317347   40336 command_runner.go:130] > # The certificate for the secure metrics server.
	I0410 22:16:27.317362   40336 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0410 22:16:27.317375   40336 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0410 22:16:27.317385   40336 command_runner.go:130] > # certificate on any modification event.
	I0410 22:16:27.317392   40336 command_runner.go:130] > # metrics_cert = ""
	I0410 22:16:27.317397   40336 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0410 22:16:27.317403   40336 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0410 22:16:27.317407   40336 command_runner.go:130] > # metrics_key = ""
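	The [crio.metrics] block above drives CRI-O's Prometheus endpoint: enable_metrics turns it on, metrics_collectors selects the series, and metrics_port (default 9090) is where it listens. A minimal scrape sketch, assuming metrics are enabled on the default port and the conventional /metrics path; the filtered metric name is the "operations_total" collector from the list above with the "crio_" prefix (illustrative only, not part of the test):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func main() {
		// Scrape CRI-O's metrics endpoint (assumes enable_metrics = true and metrics_port = 9090).
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("scrape failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// Print only the crio_operations_total samples as an example collector.
		for _, line := range strings.Split(string(body), "\n") {
			if strings.HasPrefix(line, "crio_operations_total") {
				fmt.Println(line)
			}
		}
	}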
	I0410 22:16:27.317415   40336 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0410 22:16:27.317419   40336 command_runner.go:130] > [crio.tracing]
	I0410 22:16:27.317427   40336 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0410 22:16:27.317433   40336 command_runner.go:130] > # enable_tracing = false
	I0410 22:16:27.317439   40336 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0410 22:16:27.317445   40336 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0410 22:16:27.317452   40336 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0410 22:16:27.317463   40336 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0410 22:16:27.317468   40336 command_runner.go:130] > # CRI-O NRI configuration.
	I0410 22:16:27.317474   40336 command_runner.go:130] > [crio.nri]
	I0410 22:16:27.317478   40336 command_runner.go:130] > # Globally enable or disable NRI.
	I0410 22:16:27.317485   40336 command_runner.go:130] > # enable_nri = false
	I0410 22:16:27.317489   40336 command_runner.go:130] > # NRI socket to listen on.
	I0410 22:16:27.317496   40336 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0410 22:16:27.317500   40336 command_runner.go:130] > # NRI plugin directory to use.
	I0410 22:16:27.317507   40336 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0410 22:16:27.317515   40336 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0410 22:16:27.317522   40336 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0410 22:16:27.317528   40336 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0410 22:16:27.317534   40336 command_runner.go:130] > # nri_disable_connections = false
	I0410 22:16:27.317539   40336 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0410 22:16:27.317546   40336 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0410 22:16:27.317551   40336 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0410 22:16:27.317558   40336 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0410 22:16:27.317564   40336 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0410 22:16:27.317570   40336 command_runner.go:130] > [crio.stats]
	I0410 22:16:27.317576   40336 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0410 22:16:27.317583   40336 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0410 22:16:27.317590   40336 command_runner.go:130] > # stats_collection_period = 0
	I0410 22:16:27.317615   40336 command_runner.go:130] ! time="2024-04-10 22:16:27.279686821Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0410 22:16:27.317629   40336 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0410 22:16:27.317713   40336 cni.go:84] Creating CNI manager for ""
	I0410 22:16:27.317723   40336 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0410 22:16:27.317730   40336 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:16:27.317758   40336 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-824789 NodeName:multinode-824789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:16:27.317871   40336 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-824789"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
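	A note on the evictionHard thresholds in the kubelet section above: they are literal percent strings ("0%"), which is how minikube disables disk-pressure eviction. When this generated YAML is echoed through a Printf-style logging path, Go's fmt package treats the bare % as an unmatched verb and renders it as a %!"(MISSING) token, so raw captures of this block sometimes show that token where the file on disk contains "0%". A self-contained reproduction of the effect (illustrative only, not minikube code):

	package main

	import "fmt"

	func main() {
		// The kubelet config line as it exists on disk: a literal percent sign.
		line := `nodefs.available: "0%"`
		fmt.Println(line) // prints: nodefs.available: "0%"

		// Passing the same text as a *format string* leaves the % verb unmatched,
		// so fmt renders it as %!"(MISSING) -- the token seen in raw log captures.
		fmt.Printf(line + "\n") // prints: nodefs.available: "0%!"(MISSING)
	}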
	
	I0410 22:16:27.317929   40336 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:16:27.331262   40336 command_runner.go:130] > kubeadm
	I0410 22:16:27.331278   40336 command_runner.go:130] > kubectl
	I0410 22:16:27.331282   40336 command_runner.go:130] > kubelet
	I0410 22:16:27.331625   40336 binaries.go:44] Found k8s binaries, skipping transfer
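	The `sudo ls /var/lib/minikube/binaries/v1.29.3` step above is how the transfer decision is made: if kubeadm, kubectl and kubelet are already present in the versioned directory, copying them is skipped. An illustrative sketch of that check; the directory layout and version string are taken from the log, and this is not minikube's binaries.go:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// binariesPresent reports whether kubeadm, kubectl and kubelet already exist
	// under the versioned binaries directory, so the transfer step can be skipped.
	func binariesPresent(version string) bool {
		dir := filepath.Join("/var/lib/minikube/binaries", version)
		for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
			if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		if binariesPresent("v1.29.3") {
			fmt.Println("Found k8s binaries, skipping transfer")
		} else {
			fmt.Println("k8s binaries missing, transfer required")
		}
	}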
	I0410 22:16:27.331671   40336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:16:27.343757   40336 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0410 22:16:27.362930   40336 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:16:27.385496   40336 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0410 22:16:27.404743   40336 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I0410 22:16:27.409081   40336 command_runner.go:130] > 192.168.39.94	control-plane.minikube.internal
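	The grep above verifies that /etc/hosts already maps control-plane.minikube.internal to the node IP 192.168.39.94. A sketch of the same check, extended to append the entry when it is missing; ensureHostsEntry is a hypothetical helper, and actually writing /etc/hosts requires root:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry checks /etc/hosts for the control-plane alias and appends it when missing.
	func ensureHostsEntry(ip, host string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[0] == ip && fields[1] == host {
				return nil // already present, nothing to do
			}
		}
		f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
		return err
	}

	func main() {
		if err := ensureHostsEntry("192.168.39.94", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}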
	I0410 22:16:27.409143   40336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:16:27.574783   40336 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:16:27.591631   40336 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789 for IP: 192.168.39.94
	I0410 22:16:27.591654   40336 certs.go:194] generating shared ca certs ...
	I0410 22:16:27.591672   40336 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:16:27.591831   40336 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:16:27.591883   40336 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:16:27.591897   40336 certs.go:256] generating profile certs ...
	I0410 22:16:27.591977   40336 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/client.key
	I0410 22:16:27.592057   40336 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/apiserver.key.7681d9ce
	I0410 22:16:27.592110   40336 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/proxy-client.key
	I0410 22:16:27.592125   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0410 22:16:27.592152   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0410 22:16:27.592173   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0410 22:16:27.592191   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0410 22:16:27.592210   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0410 22:16:27.592231   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0410 22:16:27.592250   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0410 22:16:27.592268   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0410 22:16:27.592339   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:16:27.592378   40336 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:16:27.592392   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:16:27.592447   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:16:27.592481   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:16:27.592512   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:16:27.592565   40336 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:16:27.592606   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:16:27.592625   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem -> /usr/share/ca-certificates/13001.pem
	I0410 22:16:27.592644   40336 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> /usr/share/ca-certificates/130012.pem
	I0410 22:16:27.593191   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:16:27.619566   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:16:27.644253   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:16:27.668991   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:16:27.693015   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0410 22:16:27.717778   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:16:27.742419   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:16:27.768244   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/multinode-824789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0410 22:16:27.793137   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:16:27.821674   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:16:27.846486   40336 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:16:27.871860   40336 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:16:27.889082   40336 ssh_runner.go:195] Run: openssl version
	I0410 22:16:27.894926   40336 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0410 22:16:27.895099   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:16:27.906910   40336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:16:27.911293   40336 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:16:27.911489   40336 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:16:27.911527   40336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:16:27.917276   40336 command_runner.go:130] > b5213941
	I0410 22:16:27.917330   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:16:27.926993   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:16:27.938214   40336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:16:27.942742   40336 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:16:27.942858   40336 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:16:27.942911   40336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:16:27.949287   40336 command_runner.go:130] > 51391683
	I0410 22:16:27.949344   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:16:27.959079   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:16:27.970339   40336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:16:27.975028   40336 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:16:27.975070   40336 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:16:27.975118   40336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:16:27.980873   40336 command_runner.go:130] > 3ec20f2e
	I0410 22:16:27.980987   40336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
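	The three command sequences above install each CA certificate under /usr/share/ca-certificates and then symlink it into /etc/ssl/certs under its OpenSSL subject hash (b5213941, 51391683, 3ec20f2e), which is how OpenSSL's hashed certificate directory lookup finds it. A minimal sketch of one such link, using the same openssl invocation as the log; illustrative only, not minikube's certs.go, and it needs root plus the openssl binary:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert mirrors the logged flow: hash the PEM with `openssl x509 -hash -noout`
	// and symlink /etc/ssl/certs/<hash>.0 back to the installed certificate.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: remove any stale link, then create a fresh one.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}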
	I0410 22:16:27.990284   40336 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:16:27.994873   40336 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:16:27.994900   40336 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0410 22:16:27.994909   40336 command_runner.go:130] > Device: 253,1	Inode: 6292486     Links: 1
	I0410 22:16:27.994918   40336 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0410 22:16:27.994932   40336 command_runner.go:130] > Access: 2024-04-10 22:10:14.865982874 +0000
	I0410 22:16:27.994943   40336 command_runner.go:130] > Modify: 2024-04-10 22:10:14.865982874 +0000
	I0410 22:16:27.994950   40336 command_runner.go:130] > Change: 2024-04-10 22:10:14.865982874 +0000
	I0410 22:16:27.994967   40336 command_runner.go:130] >  Birth: 2024-04-10 22:10:14.865982874 +0000
	I0410 22:16:27.995009   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:16:28.000648   40336 command_runner.go:130] > Certificate will not expire
	I0410 22:16:28.000706   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:16:28.006701   40336 command_runner.go:130] > Certificate will not expire
	I0410 22:16:28.006850   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:16:28.012635   40336 command_runner.go:130] > Certificate will not expire
	I0410 22:16:28.012703   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:16:28.018171   40336 command_runner.go:130] > Certificate will not expire
	I0410 22:16:28.018643   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:16:28.024104   40336 command_runner.go:130] > Certificate will not expire
	I0410 22:16:28.024245   40336 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:16:28.029660   40336 command_runner.go:130] > Certificate will not expire
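	Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours; "Certificate will not expire" means it does not. The same question can be answered natively with crypto/x509. A sketch under the assumption that the certificate files are readable; the path is taken from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// the same question `openssl x509 -checkend` answers in the log above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire within 24h")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}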
	I0410 22:16:28.029856   40336 kubeadm.go:391] StartCluster: {Name:multinode-824789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-824789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.224 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:16:28.029997   40336 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:16:28.030044   40336 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:16:28.069743   40336 command_runner.go:130] > 559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa
	I0410 22:16:28.069775   40336 command_runner.go:130] > c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce
	I0410 22:16:28.069784   40336 command_runner.go:130] > 6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b
	I0410 22:16:28.069794   40336 command_runner.go:130] > 6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c
	I0410 22:16:28.069802   40336 command_runner.go:130] > 2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a
	I0410 22:16:28.069810   40336 command_runner.go:130] > cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9
	I0410 22:16:28.069823   40336 command_runner.go:130] > 33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660
	I0410 22:16:28.069833   40336 command_runner.go:130] > 8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515
	I0410 22:16:28.070023   40336 cri.go:89] found id: "559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa"
	I0410 22:16:28.070038   40336 cri.go:89] found id: "c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce"
	I0410 22:16:28.070044   40336 cri.go:89] found id: "6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b"
	I0410 22:16:28.070048   40336 cri.go:89] found id: "6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c"
	I0410 22:16:28.070053   40336 cri.go:89] found id: "2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a"
	I0410 22:16:28.070057   40336 cri.go:89] found id: "cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9"
	I0410 22:16:28.070062   40336 cri.go:89] found id: "33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660"
	I0410 22:16:28.070065   40336 cri.go:89] found id: "8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515"
	I0410 22:16:28.070069   40336 cri.go:89] found id: ""
	I0410 22:16:28.070120   40336 ssh_runner.go:195] Run: sudo runc list -f json
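	The crictl invocation above lists every container, running or exited, that carries the kube-system namespace label, and the "found id" lines echo its output one ID per line. A sketch of the same listing; it mirrors the logged command and assumes sudo and crictl are available on the node:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers asks crictl for the IDs of every container
	// (running or exited) labelled with the kube-system namespace.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}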
	
	
	==> CRI-O <==
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.471942074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712787620471920935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4d11886-9545-4794-a7cb-0ef88cd7bb9e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.472473104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38e6a9c1-b043-4e55-800b-0a03770e53a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.472552296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38e6a9c1-b043-4e55-800b-0a03770e53a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.472896892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b69ab1eeb616d179fe5a8784376b2875967c0592e582f61eea38471c80e3e84,PodSandboxId:ab19cca9130746bdf30fe7833dd218299d58facbdcb869c0bfd99da0473bb785,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712787427888182402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1452163a5519628d53512c6cdfa710d4393fba40d50434f11f2e79a552f23512,PodSandboxId:c50458e2b81378eb737e89c103b2eb1f14cca493f4e6be985045ad1e173d463f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712787394376623525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fa99a9b394acfb70cf2d3bc625515f04ac5bbaf1e83ce8ed837895f8ed2711,PodSandboxId:8fc64b964d3c6debfb4197aab2b5454bcbbe22981c74b15086aa8bc000bec36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712787394236845048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9766617e949fbcca21ed32d98fe5425705562bbf2b80ced264099a5262049093,PodSandboxId:d0bb5e97176be6754b1a25e9382b9af152484d18499d23f2c607e90826f1faf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712787394149865567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},An
notations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153b13801dcbfa0b0df8df6c049f8c0b02d3726f6fca41e1d3375d394d55c529,PodSandboxId:a4727b3c277b2644be6addc77a5f5ee7f174daf03a364aed4120671cf62f5e3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712787394141758693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9-f5bbc795697e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539e39c1eb16e404b9f016c66bfa0a50882f7a3f450a45b5430e466e766c4d1a,PodSandboxId:7657c2ef27ad1e2c2c39ceadc957c9aa5b99c3b4931db10099ff33156b8b02d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712787390419753041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f09dfe1ad20f92ced33fc247582ae9805c5208dcfdbbb61996b36c12d765d0f9,PodSandboxId:2629bdf637ddfa1fcc4a0230b1b72bc7b8f3ac51064234c275de75f54c098810,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712787390369164206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9024d499796903211265b58e900d4530ff4d8f95c482563d1fc88b6a568e3909,PodSandboxId:cd1042b2912ee16daf10d32a2b4062812d624599cc0367c7d719e0a669e27a52,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712787390378869061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7f29e9a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f191e2e49cdf6315e1e237ffb6d7db9738e9a42cf5ba7ee189377861f57,PodSandboxId:cb438d7c83c336ca9de1cca90cabe7562df21c8e04b46646ba3dc228e6c75c27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712787390348573076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[string]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7fb0eb1db503999354fe6d2250ddc1eb8b4a81807d10d1f2074ee34c0f60b7,PodSandboxId:d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712787086478490953,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa,PodSandboxId:97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712787040069753134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},Annotations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce,PodSandboxId:8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712787039290453019,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b,PodSandboxId:9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712787037548947345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c,PodSandboxId:a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712787037388503224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9
-f5bbc795697e,},Annotations:map[string]string{io.kubernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a,PodSandboxId:c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712787018157395824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb
120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660,PodSandboxId:56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712787018142947435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9,PodSandboxId:e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712787018144622016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io
.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515,PodSandboxId:136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712787018134560832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7f29e9a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38e6a9c1-b043-4e55-800b-0a03770e53a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.518271814Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4eb281cb-8664-4ca9-b97c-373e1273e35f name=/runtime.v1.RuntimeService/Version
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.518349320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4eb281cb-8664-4ca9-b97c-373e1273e35f name=/runtime.v1.RuntimeService/Version
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.519796254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87b00e15-c044-40cc-a56b-524c3697bfa7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.520329502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712787620520305692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87b00e15-c044-40cc-a56b-524c3697bfa7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.521175459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f615b222-ee6f-4f2f-b1d7-2f446e04aa86 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.521228683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f615b222-ee6f-4f2f-b1d7-2f446e04aa86 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.521918734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b69ab1eeb616d179fe5a8784376b2875967c0592e582f61eea38471c80e3e84,PodSandboxId:ab19cca9130746bdf30fe7833dd218299d58facbdcb869c0bfd99da0473bb785,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712787427888182402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1452163a5519628d53512c6cdfa710d4393fba40d50434f11f2e79a552f23512,PodSandboxId:c50458e2b81378eb737e89c103b2eb1f14cca493f4e6be985045ad1e173d463f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712787394376623525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fa99a9b394acfb70cf2d3bc625515f04ac5bbaf1e83ce8ed837895f8ed2711,PodSandboxId:8fc64b964d3c6debfb4197aab2b5454bcbbe22981c74b15086aa8bc000bec36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712787394236845048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9766617e949fbcca21ed32d98fe5425705562bbf2b80ced264099a5262049093,PodSandboxId:d0bb5e97176be6754b1a25e9382b9af152484d18499d23f2c607e90826f1faf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712787394149865567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},An
notations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153b13801dcbfa0b0df8df6c049f8c0b02d3726f6fca41e1d3375d394d55c529,PodSandboxId:a4727b3c277b2644be6addc77a5f5ee7f174daf03a364aed4120671cf62f5e3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712787394141758693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9-f5bbc795697e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539e39c1eb16e404b9f016c66bfa0a50882f7a3f450a45b5430e466e766c4d1a,PodSandboxId:7657c2ef27ad1e2c2c39ceadc957c9aa5b99c3b4931db10099ff33156b8b02d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712787390419753041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f09dfe1ad20f92ced33fc247582ae9805c5208dcfdbbb61996b36c12d765d0f9,PodSandboxId:2629bdf637ddfa1fcc4a0230b1b72bc7b8f3ac51064234c275de75f54c098810,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712787390369164206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9024d499796903211265b58e900d4530ff4d8f95c482563d1fc88b6a568e3909,PodSandboxId:cd1042b2912ee16daf10d32a2b4062812d624599cc0367c7d719e0a669e27a52,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712787390378869061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7f29e9a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f191e2e49cdf6315e1e237ffb6d7db9738e9a42cf5ba7ee189377861f57,PodSandboxId:cb438d7c83c336ca9de1cca90cabe7562df21c8e04b46646ba3dc228e6c75c27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712787390348573076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[string]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7fb0eb1db503999354fe6d2250ddc1eb8b4a81807d10d1f2074ee34c0f60b7,PodSandboxId:d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712787086478490953,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa,PodSandboxId:97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712787040069753134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},Annotations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce,PodSandboxId:8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712787039290453019,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b,PodSandboxId:9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712787037548947345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c,PodSandboxId:a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712787037388503224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9
-f5bbc795697e,},Annotations:map[string]string{io.kubernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a,PodSandboxId:c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712787018157395824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb
120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660,PodSandboxId:56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712787018142947435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9,PodSandboxId:e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712787018144622016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io
.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515,PodSandboxId:136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712787018134560832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7f29e9a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f615b222-ee6f-4f2f-b1d7-2f446e04aa86 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.566166691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8d19209-5bc6-49a2-abe0-1f5c91b9d5a8 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.566240531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8d19209-5bc6-49a2-abe0-1f5c91b9d5a8 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.567879100Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c203cb6-df8b-4568-b670-028e87541e37 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.568658422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712787620568633544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c203cb6-df8b-4568-b670-028e87541e37 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.569192844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc0d1deb-28fb-4687-b14f-4be94e6acfaf name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.569394820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc0d1deb-28fb-4687-b14f-4be94e6acfaf name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.569748430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b69ab1eeb616d179fe5a8784376b2875967c0592e582f61eea38471c80e3e84,PodSandboxId:ab19cca9130746bdf30fe7833dd218299d58facbdcb869c0bfd99da0473bb785,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712787427888182402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1452163a5519628d53512c6cdfa710d4393fba40d50434f11f2e79a552f23512,PodSandboxId:c50458e2b81378eb737e89c103b2eb1f14cca493f4e6be985045ad1e173d463f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712787394376623525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fa99a9b394acfb70cf2d3bc625515f04ac5bbaf1e83ce8ed837895f8ed2711,PodSandboxId:8fc64b964d3c6debfb4197aab2b5454bcbbe22981c74b15086aa8bc000bec36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712787394236845048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9766617e949fbcca21ed32d98fe5425705562bbf2b80ced264099a5262049093,PodSandboxId:d0bb5e97176be6754b1a25e9382b9af152484d18499d23f2c607e90826f1faf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712787394149865567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},An
notations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153b13801dcbfa0b0df8df6c049f8c0b02d3726f6fca41e1d3375d394d55c529,PodSandboxId:a4727b3c277b2644be6addc77a5f5ee7f174daf03a364aed4120671cf62f5e3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712787394141758693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9-f5bbc795697e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539e39c1eb16e404b9f016c66bfa0a50882f7a3f450a45b5430e466e766c4d1a,PodSandboxId:7657c2ef27ad1e2c2c39ceadc957c9aa5b99c3b4931db10099ff33156b8b02d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712787390419753041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f09dfe1ad20f92ced33fc247582ae9805c5208dcfdbbb61996b36c12d765d0f9,PodSandboxId:2629bdf637ddfa1fcc4a0230b1b72bc7b8f3ac51064234c275de75f54c098810,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712787390369164206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9024d499796903211265b58e900d4530ff4d8f95c482563d1fc88b6a568e3909,PodSandboxId:cd1042b2912ee16daf10d32a2b4062812d624599cc0367c7d719e0a669e27a52,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712787390378869061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7f29e9a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f191e2e49cdf6315e1e237ffb6d7db9738e9a42cf5ba7ee189377861f57,PodSandboxId:cb438d7c83c336ca9de1cca90cabe7562df21c8e04b46646ba3dc228e6c75c27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712787390348573076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[string]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7fb0eb1db503999354fe6d2250ddc1eb8b4a81807d10d1f2074ee34c0f60b7,PodSandboxId:d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712787086478490953,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa,PodSandboxId:97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712787040069753134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},Annotations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce,PodSandboxId:8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712787039290453019,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b,PodSandboxId:9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712787037548947345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c,PodSandboxId:a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712787037388503224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9
-f5bbc795697e,},Annotations:map[string]string{io.kubernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a,PodSandboxId:c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712787018157395824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb
120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660,PodSandboxId:56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712787018142947435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9,PodSandboxId:e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712787018144622016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io
.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515,PodSandboxId:136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712787018134560832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7f29e9a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc0d1deb-28fb-4687-b14f-4be94e6acfaf name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.612977314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a43066c5-0a72-496a-89d2-dc20b79c6cfa name=/runtime.v1.RuntimeService/Version
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.613154852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a43066c5-0a72-496a-89d2-dc20b79c6cfa name=/runtime.v1.RuntimeService/Version
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.616206649Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f864005-b61a-458d-9c5f-f4825655147c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.616619830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712787620616597019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f864005-b61a-458d-9c5f-f4825655147c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.618899877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09696120-d42c-45d2-aea8-69cce5b10f90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.618988770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09696120-d42c-45d2-aea8-69cce5b10f90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:20:20 multinode-824789 crio[2854]: time="2024-04-10 22:20:20.621928569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b69ab1eeb616d179fe5a8784376b2875967c0592e582f61eea38471c80e3e84,PodSandboxId:ab19cca9130746bdf30fe7833dd218299d58facbdcb869c0bfd99da0473bb785,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712787427888182402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1452163a5519628d53512c6cdfa710d4393fba40d50434f11f2e79a552f23512,PodSandboxId:c50458e2b81378eb737e89c103b2eb1f14cca493f4e6be985045ad1e173d463f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712787394376623525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fa99a9b394acfb70cf2d3bc625515f04ac5bbaf1e83ce8ed837895f8ed2711,PodSandboxId:8fc64b964d3c6debfb4197aab2b5454bcbbe22981c74b15086aa8bc000bec36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712787394236845048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9766617e949fbcca21ed32d98fe5425705562bbf2b80ced264099a5262049093,PodSandboxId:d0bb5e97176be6754b1a25e9382b9af152484d18499d23f2c607e90826f1faf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712787394149865567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},An
notations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153b13801dcbfa0b0df8df6c049f8c0b02d3726f6fca41e1d3375d394d55c529,PodSandboxId:a4727b3c277b2644be6addc77a5f5ee7f174daf03a364aed4120671cf62f5e3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712787394141758693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9-f5bbc795697e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539e39c1eb16e404b9f016c66bfa0a50882f7a3f450a45b5430e466e766c4d1a,PodSandboxId:7657c2ef27ad1e2c2c39ceadc957c9aa5b99c3b4931db10099ff33156b8b02d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712787390419753041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f09dfe1ad20f92ced33fc247582ae9805c5208dcfdbbb61996b36c12d765d0f9,PodSandboxId:2629bdf637ddfa1fcc4a0230b1b72bc7b8f3ac51064234c275de75f54c098810,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712787390369164206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9024d499796903211265b58e900d4530ff4d8f95c482563d1fc88b6a568e3909,PodSandboxId:cd1042b2912ee16daf10d32a2b4062812d624599cc0367c7d719e0a669e27a52,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712787390378869061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7f29e9a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f191e2e49cdf6315e1e237ffb6d7db9738e9a42cf5ba7ee189377861f57,PodSandboxId:cb438d7c83c336ca9de1cca90cabe7562df21c8e04b46646ba3dc228e6c75c27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712787390348573076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[string]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7fb0eb1db503999354fe6d2250ddc1eb8b4a81807d10d1f2074ee34c0f60b7,PodSandboxId:d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712787086478490953,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-k2ds9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f84d3580-83d9-497d-bc27-9d1112849093,},Annotations:map[string]string{io.kubernetes.container.hash: c5c43c7e,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559d8ae61200e3ba5d2a71f3c2058d4f2b1af0bedb839a2a8271d366e75a24fa,PodSandboxId:97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712787040069753134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e571cab5-3579-4616-90f8-a9c465e70ace,},Annotations:map[string]string{io.kubernetes.container.hash: c59df8f7,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce,PodSandboxId:8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712787039290453019,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q2q8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335e4d5-f65f-4722-b2c1-60e22cd08383,},Annotations:map[string]string{io.kubernetes.container.hash: 4b782acf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b,PodSandboxId:9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712787037548947345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wtnkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 7169290a-557c-4861-8ecd-e2a0b2c0b290,},Annotations:map[string]string{io.kubernetes.container.hash: 18c7ea1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c,PodSandboxId:a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712787037388503224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jczhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc151d6-2081-4f28-80d9
-f5bbc795697e,},Annotations:map[string]string{io.kubernetes.container.hash: 47cf10f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a,PodSandboxId:c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712787018157395824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800bfb
120fc35f1c411b49e7bd24fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660,PodSandboxId:56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712787018142947435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2c1d24c176a5f0fdc05076676f83e4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 54acc59a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9,PodSandboxId:e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712787018144622016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6548b0f76d3607d58faa9b3e608948,},Annotations:map[string]string{io
.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515,PodSandboxId:136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712787018134560832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-824789,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12b4e0e3d4dfd3581ea04dc539f54186,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7f29e9a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09696120-d42c-45d2-aea8-69cce5b10f90 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b69ab1eeb616       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   ab19cca913074       busybox-7fdf7869d9-k2ds9
	1452163a55196       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   c50458e2b8137       kindnet-wtnkq
	95fa99a9b394a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   8fc64b964d3c6       coredns-76f75df574-q2q8c
	9766617e949fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   d0bb5e97176be       storage-provisioner
	153b13801dcbf       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      3 minutes ago       Running             kube-proxy                1                   a4727b3c277b2       kube-proxy-jczhc
	539e39c1eb16e       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      3 minutes ago       Running             kube-scheduler            1                   7657c2ef27ad1       kube-scheduler-multinode-824789
	9024d49979690       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      3 minutes ago       Running             kube-apiserver            1                   cd1042b2912ee       kube-apiserver-multinode-824789
	f09dfe1ad20f9       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      3 minutes ago       Running             kube-controller-manager   1                   2629bdf637ddf       kube-controller-manager-multinode-824789
	34e61f191e2e4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   cb438d7c83c33       etcd-multinode-824789
	3f7fb0eb1db50       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   d4bf1d7c40812       busybox-7fdf7869d9-k2ds9
	559d8ae61200e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   97fdf93300610       storage-provisioner
	c7dc29ebd6ee4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   8bbc7f26b3f24       coredns-76f75df574-q2q8c
	6b912245ff199       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   9a40c2487b0b8       kindnet-wtnkq
	6d0d4dd927396       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      9 minutes ago       Exited              kube-proxy                0                   a4899072a08ff       kube-proxy-jczhc
	2541b56a95637       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      10 minutes ago      Exited              kube-controller-manager   0                   c70ffd4456f7d       kube-controller-manager-multinode-824789
	cbf4abb7ad40e       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      10 minutes ago      Exited              kube-scheduler            0                   e55cc501e3962       kube-scheduler-multinode-824789
	33e5663b850f3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   56ccb96fb9f1e       etcd-multinode-824789
	8486ace19c171       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      10 minutes ago      Exited              kube-apiserver            0                   136bc181084da       kube-apiserver-multinode-824789
	
	
	==> coredns [95fa99a9b394acfb70cf2d3bc625515f04ac5bbaf1e83ce8ed837895f8ed2711] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34816 - 37372 "HINFO IN 6222664666433173775.8478308336439852750. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012972412s
	
	
	==> coredns [c7dc29ebd6ee440c1b6ce074966c0ed7caf78253e19d0f5232e7950c151fc4ce] <==
	[INFO] 10.244.0.3:38854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001791375s
	[INFO] 10.244.0.3:42513 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118794s
	[INFO] 10.244.0.3:48278 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202941s
	[INFO] 10.244.0.3:51443 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001217049s
	[INFO] 10.244.0.3:45968 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000032888s
	[INFO] 10.244.0.3:51559 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184773s
	[INFO] 10.244.0.3:35257 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050636s
	[INFO] 10.244.1.2:48719 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160086s
	[INFO] 10.244.1.2:33455 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116449s
	[INFO] 10.244.1.2:47230 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104791s
	[INFO] 10.244.1.2:59959 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008388s
	[INFO] 10.244.0.3:52061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176839s
	[INFO] 10.244.0.3:33997 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092769s
	[INFO] 10.244.0.3:58215 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007837s
	[INFO] 10.244.0.3:50061 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076373s
	[INFO] 10.244.1.2:55978 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228426s
	[INFO] 10.244.1.2:50575 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177548s
	[INFO] 10.244.1.2:39720 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000271686s
	[INFO] 10.244.1.2:45401 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134787s
	[INFO] 10.244.0.3:45840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113495s
	[INFO] 10.244.0.3:36486 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000045624s
	[INFO] 10.244.0.3:60591 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071447s
	[INFO] 10.244.0.3:39383 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000057031s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
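	
	The [INFO] query lines above are emitted by CoreDNS's log plugin. As an illustration only (assuming the cluster is reachable via kubectl), an equivalent A/AAAA lookup can be triggered from a throwaway pod:
	
	  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local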
	
	
	==> describe nodes <==
	Name:               multinode-824789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-824789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=multinode-824789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T22_10_24_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:10:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-824789
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 22:20:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 22:16:33 +0000   Wed, 10 Apr 2024 22:10:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 22:16:33 +0000   Wed, 10 Apr 2024 22:10:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 22:16:33 +0000   Wed, 10 Apr 2024 22:10:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 22:16:33 +0000   Wed, 10 Apr 2024 22:10:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    multinode-824789
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e962647442c84f0e870f4be227995ec1
	  System UUID:                e9626474-42c8-4f0e-870f-4be227995ec1
	  Boot ID:                    951c22ea-9250-4433-b6ed-61a6ed09bb24
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-k2ds9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 coredns-76f75df574-q2q8c                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m44s
	  kube-system                 etcd-multinode-824789                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m56s
	  kube-system                 kindnet-wtnkq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m44s
	  kube-system                 kube-apiserver-multinode-824789             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 kube-controller-manager-multinode-824789    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 kube-proxy-jczhc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-scheduler-multinode-824789             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m42s                  kube-proxy       
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-824789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-824789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-824789 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     9m56s                  kubelet          Node multinode-824789 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m56s                  kubelet          Node multinode-824789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s                  kubelet          Node multinode-824789 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m56s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m45s                  node-controller  Node multinode-824789 event: Registered Node multinode-824789 in Controller
	  Normal  NodeReady                9m42s                  kubelet          Node multinode-824789 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node multinode-824789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node multinode-824789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node multinode-824789 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m34s                  node-controller  Node multinode-824789 event: Registered Node multinode-824789 in Controller
	
	
	Name:               multinode-824789-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-824789-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=multinode-824789
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_10T22_17_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:17:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-824789-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 22:17:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 10 Apr 2024 22:17:46 +0000   Wed, 10 Apr 2024 22:18:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 10 Apr 2024 22:17:46 +0000   Wed, 10 Apr 2024 22:18:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 10 Apr 2024 22:17:46 +0000   Wed, 10 Apr 2024 22:18:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 10 Apr 2024 22:17:46 +0000   Wed, 10 Apr 2024 22:18:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.85
	  Hostname:    multinode-824789-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7f81c6e777f43cf97aef7d828e06ed9
	  System UUID:                a7f81c6e-777f-43cf-97ae-f7d828e06ed9
	  Boot ID:                    7bcfe80e-c21d-4735-a54e-8f4150c58e96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7p7kp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 kindnet-4dcbv               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m8s
	  kube-system                 kube-proxy-qvf7k            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  Starting                 9m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  9m9s (x2 over 9m9s)  kubelet          Node multinode-824789-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m9s (x2 over 9m9s)  kubelet          Node multinode-824789-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m9s (x2 over 9m9s)  kubelet          Node multinode-824789-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m59s                kubelet          Node multinode-824789-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m5s (x2 over 3m5s)  kubelet          Node multinode-824789-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m5s (x2 over 3m5s)  kubelet          Node multinode-824789-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s (x2 over 3m5s)  kubelet          Node multinode-824789-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m57s                kubelet          Node multinode-824789-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                 node-controller  Node multinode-824789-m02 status is now: NodeNotReady
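	
	multinode-824789-m02 carries node.kubernetes.io/unreachable taints and Unknown conditions because its kubelet stopped posting status at 22:18:36. As a sketch (assuming kubectl access to this cluster), the same taint and condition data can be read directly:
	
	  kubectl get node multinode-824789-m02 -o jsonpath='{.spec.taints}'
	  kubectl get node multinode-824789-m02 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'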
	
	
	==> dmesg <==
	[  +0.057339] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059586] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.200265] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.122407] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.287304] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.481215] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.062532] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.878711] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +1.197943] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.619447] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.079883] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.073193] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.115244] kauditd_printk_skb: 21 callbacks suppressed
	[Apr10 22:11] kauditd_printk_skb: 84 callbacks suppressed
	[Apr10 22:16] systemd-fstab-generator[2772]: Ignoring "noauto" option for root device
	[  +0.153212] systemd-fstab-generator[2784]: Ignoring "noauto" option for root device
	[  +0.181027] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.158222] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.286379] systemd-fstab-generator[2839]: Ignoring "noauto" option for root device
	[  +0.772835] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +1.897747] systemd-fstab-generator[3065]: Ignoring "noauto" option for root device
	[  +4.677814] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.598612] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.015694] systemd-fstab-generator[3883]: Ignoring "noauto" option for root device
	[Apr10 22:17] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [33e5663b850f38138c30aad20177833de666a6e18c80807aeffee6fa023c2660] <==
	{"level":"info","ts":"2024-04-10T22:11:12.164374Z","caller":"traceutil/trace.go:171","msg":"trace[905070006] linearizableReadLoop","detail":"{readStateIndex:490; appliedIndex:489; }","duration":"249.037662ms","start":"2024-04-10T22:11:11.915327Z","end":"2024-04-10T22:11:12.164365Z","steps":["trace[905070006] 'read index received'  (duration: 242.951588ms)","trace[905070006] 'applied index is now lower than readState.Index'  (duration: 6.085398ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-10T22:11:12.164557Z","caller":"traceutil/trace.go:171","msg":"trace[1671387732] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"246.140207ms","start":"2024-04-10T22:11:11.91841Z","end":"2024-04-10T22:11:12.16455Z","steps":["trace[1671387732] 'process raft request'  (duration: 245.262977ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T22:11:12.164854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.45907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-824789-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T22:11:12.164915Z","caller":"traceutil/trace.go:171","msg":"trace[1285588162] range","detail":"{range_begin:/registry/csinodes/multinode-824789-m02; range_end:; response_count:0; response_revision:475; }","duration":"249.604557ms","start":"2024-04-10T22:11:11.915304Z","end":"2024-04-10T22:11:12.164908Z","steps":["trace[1285588162] 'agreement among raft nodes before linearized reading'  (duration: 249.465521ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T22:11:12.165108Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.81623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T22:11:12.165158Z","caller":"traceutil/trace.go:171","msg":"trace[1109851085] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:475; }","duration":"234.891548ms","start":"2024-04-10T22:11:11.930258Z","end":"2024-04-10T22:11:12.16515Z","steps":["trace[1109851085] 'agreement among raft nodes before linearized reading'  (duration: 234.824663ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T22:11:12.165393Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.073214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-824789-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T22:11:12.167155Z","caller":"traceutil/trace.go:171","msg":"trace[1742958869] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-824789-m02; range_end:; response_count:0; response_revision:475; }","duration":"103.854468ms","start":"2024-04-10T22:11:12.063291Z","end":"2024-04-10T22:11:12.167146Z","steps":["trace[1742958869] 'agreement among raft nodes before linearized reading'  (duration: 102.07959ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T22:11:13.567811Z","caller":"traceutil/trace.go:171","msg":"trace[2079496901] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"237.282322ms","start":"2024-04-10T22:11:13.330512Z","end":"2024-04-10T22:11:13.567794Z","steps":["trace[2079496901] 'process raft request'  (duration: 237.087385ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T22:11:58.662778Z","caller":"traceutil/trace.go:171","msg":"trace[2059953688] linearizableReadLoop","detail":"{readStateIndex:634; appliedIndex:632; }","duration":"187.507947ms","start":"2024-04-10T22:11:58.475253Z","end":"2024-04-10T22:11:58.662761Z","steps":["trace[2059953688] 'read index received'  (duration: 186.643266ms)","trace[2059953688] 'applied index is now lower than readState.Index'  (duration: 863.895µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-10T22:11:58.662871Z","caller":"traceutil/trace.go:171","msg":"trace[1302572631] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"187.787877ms","start":"2024-04-10T22:11:58.475077Z","end":"2024-04-10T22:11:58.662865Z","steps":["trace[1302572631] 'process raft request'  (duration: 187.631288ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T22:11:58.663094Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.773588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-824789-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T22:11:58.663141Z","caller":"traceutil/trace.go:171","msg":"trace[133383368] range","detail":"{range_begin:/registry/csinodes/multinode-824789-m03; range_end:; response_count:0; response_revision:603; }","duration":"187.899202ms","start":"2024-04-10T22:11:58.475232Z","end":"2024-04-10T22:11:58.663132Z","steps":["trace[133383368] 'agreement among raft nodes before linearized reading'  (duration: 187.754386ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T22:11:58.662783Z","caller":"traceutil/trace.go:171","msg":"trace[234368257] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"221.431262ms","start":"2024-04-10T22:11:58.441338Z","end":"2024-04-10T22:11:58.66277Z","steps":["trace[234368257] 'process raft request'  (duration: 220.593512ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T22:14:54.569604Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-10T22:14:54.569737Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-824789","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"]}
	{"level":"warn","ts":"2024-04-10T22:14:54.590444Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-10T22:14:54.590661Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/04/10 22:14:54 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-10T22:14:54.646648Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-10T22:14:54.64683Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.94:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-10T22:14:54.646963Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c23cd90330b5fc4f","current-leader-member-id":"c23cd90330b5fc4f"}
	{"level":"info","ts":"2024-04-10T22:14:54.650269Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-04-10T22:14:54.650621Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-04-10T22:14:54.650724Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-824789","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"]}
	
	
	==> etcd [34e61f191e2e49cdf6315e1e237ffb6d7db9738e9a42cf5ba7ee189377861f57] <==
	{"level":"info","ts":"2024-04-10T22:16:30.87528Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:16:30.875307Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:16:30.875555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f switched to configuration voters=(13996300349686021199)"}
	{"level":"info","ts":"2024-04-10T22:16:30.875643Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f81fab91992620a9","local-member-id":"c23cd90330b5fc4f","added-peer-id":"c23cd90330b5fc4f","added-peer-peer-urls":["https://192.168.39.94:2380"]}
	{"level":"info","ts":"2024-04-10T22:16:30.875775Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f81fab91992620a9","local-member-id":"c23cd90330b5fc4f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:16:30.875818Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:16:30.893381Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-10T22:16:30.893605Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c23cd90330b5fc4f","initial-advertise-peer-urls":["https://192.168.39.94:2380"],"listen-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.94:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-10T22:16:30.893651Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-10T22:16:30.89379Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-04-10T22:16:30.89382Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-04-10T22:16:31.904649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-10T22:16:31.904833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-10T22:16:31.904887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f received MsgPreVoteResp from c23cd90330b5fc4f at term 2"}
	{"level":"info","ts":"2024-04-10T22:16:31.905147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became candidate at term 3"}
	{"level":"info","ts":"2024-04-10T22:16:31.905182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f received MsgVoteResp from c23cd90330b5fc4f at term 3"}
	{"level":"info","ts":"2024-04-10T22:16:31.9052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became leader at term 3"}
	{"level":"info","ts":"2024-04-10T22:16:31.905211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c23cd90330b5fc4f elected leader c23cd90330b5fc4f at term 3"}
	{"level":"info","ts":"2024-04-10T22:16:31.913193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:16:31.915513Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.94:2379"}
	{"level":"info","ts":"2024-04-10T22:16:31.91588Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:16:31.917609Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-10T22:16:31.920101Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c23cd90330b5fc4f","local-member-attributes":"{Name:multinode-824789 ClientURLs:[https://192.168.39.94:2379]}","request-path":"/0/members/c23cd90330b5fc4f/attributes","cluster-id":"f81fab91992620a9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-10T22:16:31.920317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-10T22:16:31.920351Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:20:21 up 10 min,  0 users,  load average: 0.08, 0.18, 0.15
	Linux multinode-824789 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1452163a5519628d53512c6cdfa710d4393fba40d50434f11f2e79a552f23512] <==
	I0410 22:19:15.432841       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:19:25.445397       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:19:25.445554       1 main.go:227] handling current node
	I0410 22:19:25.445586       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:19:25.445612       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:19:35.451628       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:19:35.486236       1 main.go:227] handling current node
	I0410 22:19:35.486368       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:19:35.486411       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:19:45.498022       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:19:45.498263       1 main.go:227] handling current node
	I0410 22:19:45.498375       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:19:45.498406       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:19:55.503960       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:19:55.504190       1 main.go:227] handling current node
	I0410 22:19:55.504268       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:19:55.504304       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:20:05.510147       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:20:05.510257       1 main.go:227] handling current node
	I0410 22:20:05.510285       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:20:05.510302       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:20:15.516818       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:20:15.516920       1 main.go:227] handling current node
	I0410 22:20:15.516940       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:20:15.516950       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [6b912245ff19931ce5c9e4bc61ff0b004c72ae768f856684259bfb4db2fb768b] <==
	I0410 22:14:08.818154       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:14:18.830724       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:14:18.830779       1 main.go:227] handling current node
	I0410 22:14:18.830798       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:14:18.830806       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:14:18.830990       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:14:18.831019       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:14:28.838361       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:14:28.838494       1 main.go:227] handling current node
	I0410 22:14:28.838525       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:14:28.838558       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:14:28.838727       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:14:28.838784       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:14:38.848135       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:14:38.848185       1 main.go:227] handling current node
	I0410 22:14:38.848197       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:14:38.848208       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:14:38.848337       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:14:38.848368       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	I0410 22:14:48.853864       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0410 22:14:48.853996       1 main.go:227] handling current node
	I0410 22:14:48.854024       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I0410 22:14:48.854160       1 main.go:250] Node multinode-824789-m02 has CIDR [10.244.1.0/24] 
	I0410 22:14:48.854327       1 main.go:223] Handling node with IPs: map[192.168.39.224:{}]
	I0410 22:14:48.854384       1 main.go:250] Node multinode-824789-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8486ace19c17159f5709414d67c7af1364c7de18561079ec6a14f197eb911515] <==
	I0410 22:10:20.836467       1 shared_informer.go:318] Caches are synced for configmaps
	I0410 22:10:20.836627       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0410 22:10:20.836668       1 aggregator.go:165] initial CRD sync complete...
	I0410 22:10:20.836675       1 autoregister_controller.go:141] Starting autoregister controller
	I0410 22:10:20.836679       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0410 22:10:20.836683       1 cache.go:39] Caches are synced for autoregister controller
	I0410 22:10:20.840529       1 controller.go:624] quota admission added evaluator for: namespaces
	I0410 22:10:20.875557       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0410 22:10:21.729835       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0410 22:10:21.734973       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0410 22:10:21.735115       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0410 22:10:22.439993       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0410 22:10:22.504340       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0410 22:10:22.592651       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0410 22:10:22.612997       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.94]
	I0410 22:10:22.615314       1 controller.go:624] quota admission added evaluator for: endpoints
	I0410 22:10:22.622733       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0410 22:10:22.784336       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0410 22:10:24.001414       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0410 22:10:24.021868       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0410 22:10:24.042644       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0410 22:10:36.540575       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0410 22:10:36.690647       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0410 22:14:54.566379       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0410 22:14:54.598113       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [9024d499796903211265b58e900d4530ff4d8f95c482563d1fc88b6a568e3909] <==
	I0410 22:16:33.293499       1 establishing_controller.go:76] Starting EstablishingController
	I0410 22:16:33.293516       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0410 22:16:33.293548       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0410 22:16:33.293563       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0410 22:16:33.370557       1 shared_informer.go:318] Caches are synced for configmaps
	I0410 22:16:33.372474       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0410 22:16:33.373760       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0410 22:16:33.384783       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0410 22:16:33.385018       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0410 22:16:33.385106       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0410 22:16:33.390797       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0410 22:16:33.392274       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0410 22:16:33.392457       1 aggregator.go:165] initial CRD sync complete...
	I0410 22:16:33.392490       1 autoregister_controller.go:141] Starting autoregister controller
	I0410 22:16:33.392513       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0410 22:16:33.392537       1 cache.go:39] Caches are synced for autoregister controller
	I0410 22:16:33.432167       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0410 22:16:34.312789       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0410 22:16:35.634781       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0410 22:16:35.776845       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0410 22:16:35.789982       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0410 22:16:35.868402       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0410 22:16:35.875927       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0410 22:16:46.558732       1 controller.go:624] quota admission added evaluator for: endpoints
	I0410 22:16:46.608948       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2541b56a95637bf11f9ee6531105e8bdfd19cae8b2b4e1c40a718f501ba9904a] <==
	I0410 22:11:27.442361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="5.477167ms"
	I0410 22:11:27.443224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="54.389µs"
	I0410 22:11:58.672275       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-824789-m03\" does not exist"
	I0410 22:11:58.672756       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:11:58.688883       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-824789-m03" podCIDRs=["10.244.2.0/24"]
	I0410 22:11:58.704396       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jtd5w"
	I0410 22:11:58.710514       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwtsd"
	I0410 22:12:00.855131       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-824789-m03"
	I0410 22:12:00.855230       1 event.go:376] "Event occurred" object="multinode-824789-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-824789-m03 event: Registered Node multinode-824789-m03 in Controller"
	I0410 22:12:08.322618       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:12:38.442685       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:12:39.546615       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-824789-m03\" does not exist"
	I0410 22:12:39.547394       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:12:39.557461       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-824789-m03" podCIDRs=["10.244.3.0/24"]
	I0410 22:12:48.751509       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m03"
	I0410 22:13:25.911256       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m03"
	I0410 22:13:25.912348       1 event.go:376] "Event occurred" object="multinode-824789-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-824789-m02 status is now: NodeNotReady"
	I0410 22:13:25.935015       1 event.go:376] "Event occurred" object="kube-system/kindnet-4dcbv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:13:25.959584       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-qvf7k" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:13:25.983370       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-6cmbq" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:13:25.997076       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.515619ms"
	I0410 22:13:25.997208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="68.025µs"
	I0410 22:13:30.996264       1 event.go:376] "Event occurred" object="multinode-824789-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-824789-m03 status is now: NodeNotReady"
	I0410 22:13:31.009130       1 event.go:376] "Event occurred" object="kube-system/kindnet-rwtsd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:13:31.021654       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-jtd5w" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-controller-manager [f09dfe1ad20f92ced33fc247582ae9805c5208dcfdbbb61996b36c12d765d0f9] <==
	I0410 22:17:16.534885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="51.542µs"
	I0410 22:17:16.535152       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-6cmbq" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-6cmbq"
	I0410 22:17:23.728415       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:17:23.754222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="49.545µs"
	I0410 22:17:23.778191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="70.426µs"
	I0410 22:17:26.383204       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-7p7kp" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-7p7kp"
	I0410 22:17:26.671200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.221062ms"
	I0410 22:17:26.672927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="322.636µs"
	I0410 22:17:42.955885       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:17:44.190865       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:17:44.191911       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-824789-m03\" does not exist"
	I0410 22:17:44.205339       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-824789-m03" podCIDRs=["10.244.2.0/24"]
	I0410 22:17:53.243949       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:17:59.084275       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-824789-m02"
	I0410 22:18:01.401297       1 event.go:376] "Event occurred" object="multinode-824789-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-824789-m03 event: Removing Node multinode-824789-m03 from Controller"
	I0410 22:18:36.421234       1 event.go:376] "Event occurred" object="multinode-824789-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-824789-m02 status is now: NodeNotReady"
	I0410 22:18:36.443522       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-7p7kp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:18:36.467860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="26.063392ms"
	I0410 22:18:36.468305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="60.236µs"
	I0410 22:18:36.469428       1 event.go:376] "Event occurred" object="kube-system/kindnet-4dcbv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:18:36.485107       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-qvf7k" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0410 22:19:06.369645       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-rwtsd"
	I0410 22:19:06.396856       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-rwtsd"
	I0410 22:19:06.396900       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-jtd5w"
	I0410 22:19:06.419476       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-jtd5w"
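	
	The PodGC entries above force-delete kindnet-rwtsd and kube-proxy-jtd5w, which were orphaned when node multinode-824789-m03 was removed at 22:18:01. As a sketch (assuming kubectl access), the pods still bound to a removed or NotReady node can be listed with a field selector:
	
	  kubectl get pods -A -o wide --field-selector spec.nodeName=multinode-824789-m03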
	
	
	==> kube-proxy [153b13801dcbfa0b0df8df6c049f8c0b02d3726f6fca41e1d3375d394d55c529] <==
	I0410 22:16:34.455156       1 server_others.go:72] "Using iptables proxy"
	I0410 22:16:34.474717       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.94"]
	I0410 22:16:34.553287       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 22:16:34.553371       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:16:34.553396       1 server_others.go:168] "Using iptables Proxier"
	I0410 22:16:34.560640       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:16:34.560863       1 server.go:865] "Version info" version="v1.29.3"
	I0410 22:16:34.560896       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:16:34.562367       1 config.go:188] "Starting service config controller"
	I0410 22:16:34.562437       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 22:16:34.562470       1 config.go:97] "Starting endpoint slice config controller"
	I0410 22:16:34.562495       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 22:16:34.563215       1 config.go:315] "Starting node config controller"
	I0410 22:16:34.563242       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 22:16:34.663455       1 shared_informer.go:318] Caches are synced for node config
	I0410 22:16:34.663502       1 shared_informer.go:318] Caches are synced for service config
	I0410 22:16:34.663523       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [6d0d4dd9273967b484f51f92f7e12ada0b0ca391d33518f7cdc8a9ded534e23c] <==
	I0410 22:10:38.022399       1 server_others.go:72] "Using iptables proxy"
	I0410 22:10:38.077411       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.94"]
	I0410 22:10:38.153401       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 22:10:38.153422       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:10:38.153433       1 server_others.go:168] "Using iptables Proxier"
	I0410 22:10:38.157017       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:10:38.157276       1 server.go:865] "Version info" version="v1.29.3"
	I0410 22:10:38.157288       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:10:38.161261       1 config.go:188] "Starting service config controller"
	I0410 22:10:38.161489       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 22:10:38.161539       1 config.go:97] "Starting endpoint slice config controller"
	I0410 22:10:38.161557       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 22:10:38.163177       1 config.go:315] "Starting node config controller"
	I0410 22:10:38.163212       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 22:10:38.262573       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0410 22:10:38.262642       1 shared_informer.go:318] Caches are synced for service config
	I0410 22:10:38.265878       1 shared_informer.go:318] Caches are synced for node config
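	
	Both kube-proxy instances log that route_localnet=1 was set so NodePorts answer on localhost, and the message itself names the knobs for changing that: iptables.localhostNodePorts and nodePortAddresses. In a kubeadm-provisioned cluster like this one those settings live in the kube-proxy ConfigMap, which could be inspected or edited like so (a sketch, not part of the captured run):
	
	  kubectl -n kube-system get configmap kube-proxy -o yaml
	  kubectl -n kube-system edit configmap kube-proxy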
	
	
	==> kube-scheduler [539e39c1eb16e404b9f016c66bfa0a50882f7a3f450a45b5430e466e766c4d1a] <==
	I0410 22:16:31.601707       1 serving.go:380] Generated self-signed cert in-memory
	W0410 22:16:33.325461       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0410 22:16:33.325503       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 22:16:33.325513       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0410 22:16:33.325519       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0410 22:16:33.385554       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0410 22:16:33.387148       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:16:33.390454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0410 22:16:33.391166       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0410 22:16:33.391610       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0410 22:16:33.394236       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0410 22:16:33.492111       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cbf4abb7ad40ea21bcf22323179f9b10801504264e307d41d755d3e5f8b8e5e9] <==
	W0410 22:10:21.692733       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0410 22:10:21.692793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0410 22:10:21.706469       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0410 22:10:21.706527       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0410 22:10:21.760547       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0410 22:10:21.760607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0410 22:10:21.844181       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0410 22:10:21.844235       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0410 22:10:21.874811       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0410 22:10:21.875479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0410 22:10:21.881409       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0410 22:10:21.881513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0410 22:10:21.905237       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0410 22:10:21.905719       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 22:10:22.009228       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0410 22:10:22.009972       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0410 22:10:22.047858       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0410 22:10:22.048623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0410 22:10:22.192264       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0410 22:10:22.192322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0410 22:10:24.037077       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0410 22:14:54.566943       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0410 22:14:54.567239       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0410 22:14:54.567501       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0410 22:14:54.589885       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 10 22:18:29 multinode-824789 kubelet[3072]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 22:18:29 multinode-824789 kubelet[3072]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 22:18:29 multinode-824789 kubelet[3072]: E0410 22:18:29.691422    3072 manager.go:1116] Failed to create existing container: /kubepods/pod7169290a-557c-4861-8ecd-e2a0b2c0b290/crio-9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e: Error finding container 9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e: Status 404 returned error can't find the container with id 9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e
	Apr 10 22:18:29 multinode-824789 kubelet[3072]: E0410 22:18:29.691761    3072 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod6bc151d6-2081-4f28-80d9-f5bbc795697e/crio-a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65: Error finding container a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65: Status 404 returned error can't find the container with id a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65
	Apr 10 22:18:29 multinode-824789 kubelet[3072]: E0410 22:18:29.691988    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod12b4e0e3d4dfd3581ea04dc539f54186/crio-136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70: Error finding container 136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70: Status 404 returned error can't find the container with id 136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70
	Apr 10 22:18:29 multinode-824789 kubelet[3072]: E0410 22:18:29.692266    3072 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pode571cab5-3579-4616-90f8-a9c465e70ace/crio-97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce: Error finding container 97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce: Status 404 returned error can't find the container with id 97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce
	Apr 10 22:18:29 multinode-824789 kubelet[3072]: E0410 22:18:29.692434    3072 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podf84d3580-83d9-497d-bc27-9d1112849093/crio-d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693: Error finding container d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693: Status 404 returned error can't find the container with id d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693
	Apr 10 22:18:29 multinode-824789 kubelet[3072]: E0410 22:18:29.692632    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod8b2c1d24c176a5f0fdc05076676f83e4/crio-56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288: Error finding container 56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288: Status 404 returned error can't find the container with id 56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288
	Apr 10 22:18:29 multinode-824789 kubelet[3072]: E0410 22:18:29.692790    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode335e4d5-f65f-4722-b2c1-60e22cd08383/crio-8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194: Error finding container 8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194: Status 404 returned error can't find the container with id 8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194
	Apr 10 22:18:29 multinode-824789 kubelet[3072]: E0410 22:18:29.692958    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6b6548b0f76d3607d58faa9b3e608948/crio-e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd: Error finding container e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd: Status 404 returned error can't find the container with id e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd
	Apr 10 22:18:29 multinode-824789 kubelet[3072]: E0410 22:18:29.693223    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2800bfb120fc35f1c411b49e7bd24fc4/crio-c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f: Error finding container c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f: Status 404 returned error can't find the container with id c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f
	Apr 10 22:19:29 multinode-824789 kubelet[3072]: E0410 22:19:29.689310    3072 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 22:19:29 multinode-824789 kubelet[3072]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 22:19:29 multinode-824789 kubelet[3072]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 22:19:29 multinode-824789 kubelet[3072]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 22:19:29 multinode-824789 kubelet[3072]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 22:19:29 multinode-824789 kubelet[3072]: E0410 22:19:29.693765    3072 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod6bc151d6-2081-4f28-80d9-f5bbc795697e/crio-a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65: Error finding container a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65: Status 404 returned error can't find the container with id a4899072a08fffc872259437814850aecb1970fc1a147fae53163f8bc6ae6e65
	Apr 10 22:19:29 multinode-824789 kubelet[3072]: E0410 22:19:29.694015    3072 manager.go:1116] Failed to create existing container: /kubepods/pod7169290a-557c-4861-8ecd-e2a0b2c0b290/crio-9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e: Error finding container 9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e: Status 404 returned error can't find the container with id 9a40c2487b0b89099cb2ad8a18821f8a5444c70f640b534d814587812abcbc1e
	Apr 10 22:19:29 multinode-824789 kubelet[3072]: E0410 22:19:29.694268    3072 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pode571cab5-3579-4616-90f8-a9c465e70ace/crio-97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce: Error finding container 97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce: Status 404 returned error can't find the container with id 97fdf93300610640a55f8d9f26679f11ab52b106d47a3c55393773a007cbdbce
	Apr 10 22:19:29 multinode-824789 kubelet[3072]: E0410 22:19:29.694421    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2800bfb120fc35f1c411b49e7bd24fc4/crio-c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f: Error finding container c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f: Status 404 returned error can't find the container with id c70ffd4456f7d99f07f71ea04094fe095d660779f4ccd28c1a07809a6301fd5f
	Apr 10 22:19:29 multinode-824789 kubelet[3072]: E0410 22:19:29.694521    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod8b2c1d24c176a5f0fdc05076676f83e4/crio-56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288: Error finding container 56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288: Status 404 returned error can't find the container with id 56ccb96fb9f1e89f8fb395d97342853122bd17c5680301c02a718eeb92d6f288
	Apr 10 22:19:29 multinode-824789 kubelet[3072]: E0410 22:19:29.695144    3072 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podf84d3580-83d9-497d-bc27-9d1112849093/crio-d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693: Error finding container d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693: Status 404 returned error can't find the container with id d4bf1d7c408122c41e63d9e84a828c448900e752841e54344d17667ed7cbc693
	Apr 10 22:19:29 multinode-824789 kubelet[3072]: E0410 22:19:29.695337    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode335e4d5-f65f-4722-b2c1-60e22cd08383/crio-8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194: Error finding container 8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194: Status 404 returned error can't find the container with id 8bbc7f26b3f24c6861865c8af008cf9dd186259b174c0903892d9620a11fa194
	Apr 10 22:19:29 multinode-824789 kubelet[3072]: E0410 22:19:29.695510    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6b6548b0f76d3607d58faa9b3e608948/crio-e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd: Error finding container e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd: Status 404 returned error can't find the container with id e55cc501e3962a8f290d13b7b6163694efdf66a722479c70c66c3181349addbd
	Apr 10 22:19:29 multinode-824789 kubelet[3072]: E0410 22:19:29.695678    3072 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod12b4e0e3d4dfd3581ea04dc539f54186/crio-136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70: Error finding container 136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70: Status 404 returned error can't find the container with id 136bc181084da378703a45f321e6e2b0e3b00a5fa6706a8f58b77d519a60fd70
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:20:20.167606   42628 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18610-5679/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-824789 -n multinode-824789
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-824789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.63s)
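Note on the stderr above: logs.go:258 fails with "bufio.Scanner: token too long" because bufio.Scanner rejects any line longer than its default 64 KiB token limit, and lastStart.txt evidently contains one. A minimal Go sketch of a reader that raises that limit follows; the file path is the one reported in the stderr, the buffer sizes are illustrative, and this is not minikube's own logs.go code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path taken from the stderr above; adjust for your environment.
		f, err := os.Open("/home/jenkins/minikube-integration/18610-5679/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is 64 KiB; allow lines up to 64 MiB instead.
		sc.Buffer(make([]byte, 0, 1024*1024), 64*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // each (possibly very long) line of lastStart.txt
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call above this is bufio.ErrTooLong,
			// i.e. the "token too long" error shown in the stderr.
			fmt.Fprintln(os.Stderr, err)
		}
	}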

                                                
                                    
x
+
TestPreload (338.59s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-804609 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0410 22:26:54.112745   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:26:59.610651   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-804609 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m15.880713337s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-804609 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-804609 image pull gcr.io/k8s-minikube/busybox: (2.826298575s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-804609
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-804609: exit status 82 (2m0.504146544s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-804609"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-804609 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-04-10 22:29:20.455345055 +0000 UTC m=+3682.691774331
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-804609 -n test-preload-804609
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-804609 -n test-preload-804609: exit status 3 (18.441559295s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:29:38.892767   45646 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	E0410 22:29:38.892791   45646 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-804609" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-804609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-804609
--- FAIL: TestPreload (338.59s)
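The failing step above is the literal command from preload_test.go:58, which returned exit status 82; the stderr attributes that to GUEST_STOP_TIMEOUT, i.e. the VM still reported "Running" when minikube gave up after about two minutes. An illustrative Go wrapper is sketched below (not the test's actual code): it reruns the same stop command from this report under an explicit deadline and classifies the outcome. The binary path and profile name are taken from the report; the 3-minute deadline is an assumption.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Deadline is illustrative; minikube's own stop gave up after ~2 minutes above.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "test-preload-804609")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		switch {
		case errors.Is(ctx.Err(), context.DeadlineExceeded):
			fmt.Println("stop did not finish before the deadline; the VM may still be running")
		case errors.As(err, &exitErr):
			// In this report the stop exited with status 82 (GUEST_STOP_TIMEOUT).
			fmt.Println("stop exited with status", exitErr.ExitCode())
		case err != nil:
			fmt.Println("stop failed:", err)
		default:
			fmt.Println("stop succeeded")
		}
	}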

                                                
                                    
x
+
TestKubernetesUpgrade (345.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-407031 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-407031 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m48.986194661s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-407031] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18610
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-407031" primary control-plane node in "kubernetes-upgrade-407031" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 22:35:28.499109   52176 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:35:28.499247   52176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:35:28.499263   52176 out.go:304] Setting ErrFile to fd 2...
	I0410 22:35:28.499270   52176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:35:28.499457   52176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:35:28.500069   52176 out.go:298] Setting JSON to false
	I0410 22:35:28.501113   52176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4671,"bootTime":1712783858,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:35:28.501178   52176 start.go:139] virtualization: kvm guest
	I0410 22:35:28.503234   52176 out.go:177] * [kubernetes-upgrade-407031] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:35:28.505069   52176 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:35:28.506545   52176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:35:28.505030   52176 notify.go:220] Checking for updates...
	I0410 22:35:28.509258   52176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:35:28.510694   52176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:35:28.512015   52176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:35:28.513236   52176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:35:28.515014   52176 config.go:182] Loaded profile config "NoKubernetes-857710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0410 22:35:28.515187   52176 config.go:182] Loaded profile config "cert-expiration-464519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:35:28.515342   52176 config.go:182] Loaded profile config "pause-262675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:35:28.515485   52176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:35:28.558128   52176 out.go:177] * Using the kvm2 driver based on user configuration
	I0410 22:35:28.559405   52176 start.go:297] selected driver: kvm2
	I0410 22:35:28.559418   52176 start.go:901] validating driver "kvm2" against <nil>
	I0410 22:35:28.559429   52176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:35:28.560085   52176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:35:28.560164   52176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:35:28.576036   52176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:35:28.576085   52176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0410 22:35:28.576280   52176 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0410 22:35:28.576338   52176 cni.go:84] Creating CNI manager for ""
	I0410 22:35:28.576350   52176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:35:28.576359   52176 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0410 22:35:28.576454   52176 start.go:340] cluster config:
	{Name:kubernetes-upgrade-407031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-407031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:35:28.576546   52176 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:35:28.578294   52176 out.go:177] * Starting "kubernetes-upgrade-407031" primary control-plane node in "kubernetes-upgrade-407031" cluster
	I0410 22:35:28.579566   52176 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:35:28.579598   52176 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0410 22:35:28.579608   52176 cache.go:56] Caching tarball of preloaded images
	I0410 22:35:28.579671   52176 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:35:28.579682   52176 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0410 22:35:28.579789   52176 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/config.json ...
	I0410 22:35:28.579808   52176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/config.json: {Name:mk07754705fcf258573c5350f10f5d88ac9b08d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:35:28.579951   52176 start.go:360] acquireMachinesLock for kubernetes-upgrade-407031: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:35:43.042752   52176 start.go:364] duration metric: took 14.462775681s to acquireMachinesLock for "kubernetes-upgrade-407031"
	I0410 22:35:43.042818   52176 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-407031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-407031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:35:43.042948   52176 start.go:125] createHost starting for "" (driver="kvm2")
	I0410 22:35:43.044637   52176 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0410 22:35:43.044845   52176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:35:43.044898   52176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:35:43.062990   52176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0410 22:35:43.063541   52176 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:35:43.064251   52176 main.go:141] libmachine: Using API Version  1
	I0410 22:35:43.064279   52176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:35:43.064667   52176 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:35:43.064877   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetMachineName
	I0410 22:35:43.065050   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:35:43.065174   52176 start.go:159] libmachine.API.Create for "kubernetes-upgrade-407031" (driver="kvm2")
	I0410 22:35:43.065201   52176 client.go:168] LocalClient.Create starting
	I0410 22:35:43.065253   52176 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem
	I0410 22:35:43.065298   52176 main.go:141] libmachine: Decoding PEM data...
	I0410 22:35:43.065319   52176 main.go:141] libmachine: Parsing certificate...
	I0410 22:35:43.065379   52176 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem
	I0410 22:35:43.065401   52176 main.go:141] libmachine: Decoding PEM data...
	I0410 22:35:43.065414   52176 main.go:141] libmachine: Parsing certificate...
	I0410 22:35:43.065441   52176 main.go:141] libmachine: Running pre-create checks...
	I0410 22:35:43.065450   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .PreCreateCheck
	I0410 22:35:43.065927   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetConfigRaw
	I0410 22:35:43.066347   52176 main.go:141] libmachine: Creating machine...
	I0410 22:35:43.066362   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .Create
	I0410 22:35:43.066503   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Creating KVM machine...
	I0410 22:35:43.067885   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found existing default KVM network
	I0410 22:35:43.069519   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:43.069350   52238 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f7e0}
	I0410 22:35:43.069553   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | created network xml: 
	I0410 22:35:43.069564   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | <network>
	I0410 22:35:43.069571   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG |   <name>mk-kubernetes-upgrade-407031</name>
	I0410 22:35:43.069583   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG |   <dns enable='no'/>
	I0410 22:35:43.069596   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG |   
	I0410 22:35:43.069605   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0410 22:35:43.069610   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG |     <dhcp>
	I0410 22:35:43.069617   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0410 22:35:43.069626   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG |     </dhcp>
	I0410 22:35:43.069638   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG |   </ip>
	I0410 22:35:43.069646   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG |   
	I0410 22:35:43.069664   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | </network>
	I0410 22:35:43.069680   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | 
	I0410 22:35:43.075201   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | trying to create private KVM network mk-kubernetes-upgrade-407031 192.168.39.0/24...
	I0410 22:35:43.164967   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | private KVM network mk-kubernetes-upgrade-407031 192.168.39.0/24 created
	I0410 22:35:43.165005   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:43.164941   52238 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:35:43.165020   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Setting up store path in /home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031 ...
	I0410 22:35:43.165042   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Building disk image from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso
	I0410 22:35:43.165063   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Downloading /home/jenkins/minikube-integration/18610-5679/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso...
	I0410 22:35:43.402186   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:43.402045   52238 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa...
	I0410 22:35:43.589933   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:43.589755   52238 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/kubernetes-upgrade-407031.rawdisk...
	I0410 22:35:43.589970   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Writing magic tar header
	I0410 22:35:43.589988   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Writing SSH key tar header
	I0410 22:35:43.590001   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:43.589933   52238 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031 ...
	I0410 22:35:43.590079   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031
	I0410 22:35:43.590108   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines
	I0410 22:35:43.590123   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:35:43.590141   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031 (perms=drwx------)
	I0410 22:35:43.590163   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines (perms=drwxr-xr-x)
	I0410 22:35:43.590179   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube (perms=drwxr-xr-x)
	I0410 22:35:43.590190   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679
	I0410 22:35:43.590208   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0410 22:35:43.590223   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Checking permissions on dir: /home/jenkins
	I0410 22:35:43.590233   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679 (perms=drwxrwxr-x)
	I0410 22:35:43.590256   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0410 22:35:43.590273   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0410 22:35:43.590286   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Checking permissions on dir: /home
	I0410 22:35:43.590301   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Skipping /home - not owner
	I0410 22:35:43.590314   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Creating domain...
	I0410 22:35:43.591519   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) define libvirt domain using xml: 
	I0410 22:35:43.591542   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) <domain type='kvm'>
	I0410 22:35:43.591554   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   <name>kubernetes-upgrade-407031</name>
	I0410 22:35:43.591565   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   <memory unit='MiB'>2200</memory>
	I0410 22:35:43.591574   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   <vcpu>2</vcpu>
	I0410 22:35:43.591580   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   <features>
	I0410 22:35:43.591589   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <acpi/>
	I0410 22:35:43.591601   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <apic/>
	I0410 22:35:43.591609   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <pae/>
	I0410 22:35:43.591633   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     
	I0410 22:35:43.591643   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   </features>
	I0410 22:35:43.591655   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   <cpu mode='host-passthrough'>
	I0410 22:35:43.591665   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   
	I0410 22:35:43.591674   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   </cpu>
	I0410 22:35:43.591681   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   <os>
	I0410 22:35:43.591688   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <type>hvm</type>
	I0410 22:35:43.591699   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <boot dev='cdrom'/>
	I0410 22:35:43.591708   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <boot dev='hd'/>
	I0410 22:35:43.591715   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <bootmenu enable='no'/>
	I0410 22:35:43.591725   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   </os>
	I0410 22:35:43.591733   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   <devices>
	I0410 22:35:43.591744   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <disk type='file' device='cdrom'>
	I0410 22:35:43.591762   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/boot2docker.iso'/>
	I0410 22:35:43.591773   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <target dev='hdc' bus='scsi'/>
	I0410 22:35:43.591782   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <readonly/>
	I0410 22:35:43.591792   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     </disk>
	I0410 22:35:43.591799   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <disk type='file' device='disk'>
	I0410 22:35:43.591811   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0410 22:35:43.591826   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/kubernetes-upgrade-407031.rawdisk'/>
	I0410 22:35:43.591836   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <target dev='hda' bus='virtio'/>
	I0410 22:35:43.591847   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     </disk>
	I0410 22:35:43.591857   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <interface type='network'>
	I0410 22:35:43.591866   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <source network='mk-kubernetes-upgrade-407031'/>
	I0410 22:35:43.591876   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <model type='virtio'/>
	I0410 22:35:43.591886   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     </interface>
	I0410 22:35:43.591895   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <interface type='network'>
	I0410 22:35:43.591910   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <source network='default'/>
	I0410 22:35:43.591920   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <model type='virtio'/>
	I0410 22:35:43.591928   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     </interface>
	I0410 22:35:43.591946   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <serial type='pty'>
	I0410 22:35:43.591963   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <target port='0'/>
	I0410 22:35:43.591971   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     </serial>
	I0410 22:35:43.591980   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <console type='pty'>
	I0410 22:35:43.591992   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <target type='serial' port='0'/>
	I0410 22:35:43.592003   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     </console>
	I0410 22:35:43.592015   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     <rng model='virtio'>
	I0410 22:35:43.592027   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)       <backend model='random'>/dev/random</backend>
	I0410 22:35:43.592037   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     </rng>
	I0410 22:35:43.592043   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     
	I0410 22:35:43.592051   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)     
	I0410 22:35:43.592059   52176 main.go:141] libmachine: (kubernetes-upgrade-407031)   </devices>
	I0410 22:35:43.592068   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) </domain>
	I0410 22:35:43.592077   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) 
	I0410 22:35:43.596685   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:c3:76:1c in network default
	I0410 22:35:43.597290   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Ensuring networks are active...
	I0410 22:35:43.597338   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:43.598194   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Ensuring network default is active
	I0410 22:35:43.598646   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Ensuring network mk-kubernetes-upgrade-407031 is active
	I0410 22:35:43.599364   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Getting domain xml...
	I0410 22:35:43.600292   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Creating domain...
	I0410 22:35:45.065957   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Waiting to get IP...
	I0410 22:35:45.067034   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:45.067510   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:45.067548   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:45.067475   52238 retry.go:31] will retry after 213.876006ms: waiting for machine to come up
	I0410 22:35:45.282869   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:45.283402   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:45.283437   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:45.283356   52238 retry.go:31] will retry after 243.275255ms: waiting for machine to come up
	I0410 22:35:45.528772   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:45.529321   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:45.529358   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:45.529260   52238 retry.go:31] will retry after 340.88049ms: waiting for machine to come up
	I0410 22:35:46.147172   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:46.147783   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:46.147823   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:46.147748   52238 retry.go:31] will retry after 597.833622ms: waiting for machine to come up
	I0410 22:35:46.747625   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:46.748082   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:46.748112   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:46.748036   52238 retry.go:31] will retry after 610.795651ms: waiting for machine to come up
	I0410 22:35:47.361100   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:47.361568   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:47.361603   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:47.361516   52238 retry.go:31] will retry after 934.76558ms: waiting for machine to come up
	I0410 22:35:48.297866   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:48.298221   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:48.298274   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:48.298178   52238 retry.go:31] will retry after 957.375998ms: waiting for machine to come up
	I0410 22:35:49.257551   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:49.258070   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:49.258105   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:49.258024   52238 retry.go:31] will retry after 1.32254566s: waiting for machine to come up
	I0410 22:35:50.582740   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:50.583304   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:50.583328   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:50.583264   52238 retry.go:31] will retry after 1.333135754s: waiting for machine to come up
	I0410 22:35:51.918904   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:51.919479   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:51.919505   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:51.919432   52238 retry.go:31] will retry after 1.477906752s: waiting for machine to come up
	I0410 22:35:53.399035   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:53.399509   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:53.399534   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:53.399462   52238 retry.go:31] will retry after 2.55977414s: waiting for machine to come up
	I0410 22:35:55.961672   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:55.962262   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:55.962290   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:55.962220   52238 retry.go:31] will retry after 2.932804291s: waiting for machine to come up
	I0410 22:35:58.896178   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:35:58.896778   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:35:58.896807   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:35:58.896712   52238 retry.go:31] will retry after 3.546406193s: waiting for machine to come up
	I0410 22:36:02.447457   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:02.447902   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find current IP address of domain kubernetes-upgrade-407031 in network mk-kubernetes-upgrade-407031
	I0410 22:36:02.447930   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | I0410 22:36:02.447858   52238 retry.go:31] will retry after 3.820126055s: waiting for machine to come up
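The retry.go entries above show libmachine polling for the domain's DHCP lease with an increasing, jittered backoff before it gives up. A minimal Go sketch of that retry pattern follows; the function name, durations, and error text are hypothetical and this is not minikube's actual retry.go implementation.

	// retry_sketch.go - illustrative only; names and timings are made up.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
	// sleeping a little longer (with jitter) between tries, similar to the
	// "will retry after ..." lines in the log above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			wait := base + time.Duration(rand.Int63n(int64(base))) // grow + jitter
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			base = base * 3 / 2
		}
		return err
	}

	func main() {
		i := 0
		err := retryWithBackoff(10, time.Second, func() error {
			i++
			if i < 4 {
				return errors.New("unable to find current IP address of domain")
			}
			return nil
		})
		fmt.Println("done:", err)
	}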
	I0410 22:36:06.269865   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:06.270344   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Found IP for machine: 192.168.39.180
	I0410 22:36:06.270371   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has current primary IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:06.270380   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Reserving static IP address...
	I0410 22:36:06.270760   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-407031", mac: "52:54:00:f3:0f:38", ip: "192.168.39.180"} in network mk-kubernetes-upgrade-407031
	I0410 22:36:06.346436   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Getting to WaitForSSH function...
	I0410 22:36:06.346474   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Reserved static IP address: 192.168.39.180
	I0410 22:36:06.346488   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Waiting for SSH to be available...
	I0410 22:36:06.349207   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:06.349550   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031
	I0410 22:36:06.349580   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-407031 interface with MAC address 52:54:00:f3:0f:38
	I0410 22:36:06.349703   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Using SSH client type: external
	I0410 22:36:06.349753   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa (-rw-------)
	I0410 22:36:06.349836   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:36:06.349866   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | About to run SSH command:
	I0410 22:36:06.349884   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | exit 0
	I0410 22:36:06.353501   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | SSH cmd err, output: exit status 255: 
	I0410 22:36:06.353527   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0410 22:36:06.353538   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | command : exit 0
	I0410 22:36:06.353545   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | err     : exit status 255
	I0410 22:36:06.353556   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | output  : 
	I0410 22:36:09.354533   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Getting to WaitForSSH function...
	I0410 22:36:09.357410   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.357758   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:09.357784   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.358011   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Using SSH client type: external
	I0410 22:36:09.358039   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa (-rw-------)
	I0410 22:36:09.358066   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:36:09.358080   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | About to run SSH command:
	I0410 22:36:09.358095   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | exit 0
	I0410 22:36:09.488920   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | SSH cmd err, output: <nil>: 
	I0410 22:36:09.489194   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) KVM machine creation complete!
	I0410 22:36:09.489565   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetConfigRaw
	I0410 22:36:09.490118   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:36:09.490309   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:36:09.490450   52176 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0410 22:36:09.490462   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetState
	I0410 22:36:09.491944   52176 main.go:141] libmachine: Detecting operating system of created instance...
	I0410 22:36:09.491964   52176 main.go:141] libmachine: Waiting for SSH to be available...
	I0410 22:36:09.491972   52176 main.go:141] libmachine: Getting to WaitForSSH function...
	I0410 22:36:09.491981   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:09.494466   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.494836   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:09.494861   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.495006   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:36:09.495205   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:09.495415   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:09.495590   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:36:09.495782   52176 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:09.495977   52176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:36:09.495989   52176 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0410 22:36:09.608052   52176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:36:09.608077   52176 main.go:141] libmachine: Detecting the provisioner...
	I0410 22:36:09.608088   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:09.611076   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.611498   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:09.611538   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.611648   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:36:09.611846   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:09.612007   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:09.612134   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:36:09.612304   52176 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:09.612546   52176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:36:09.612559   52176 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0410 22:36:09.730066   52176 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0410 22:36:09.730169   52176 main.go:141] libmachine: found compatible host: buildroot
	I0410 22:36:09.730185   52176 main.go:141] libmachine: Provisioning with buildroot...
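The provisioner is picked by reading /etc/os-release over SSH and matching its ID field, as the Buildroot output above shows. A small, self-contained Go sketch of that kind of key=value parsing (hypothetical helper, not docker-machine's actual detection code):

	// osrelease_sketch.go - hypothetical sketch of choosing a provisioner
	// from /etc/os-release contents.
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns KEY=VALUE lines into a map, stripping quotes.
	func parseOSRelease(s string) map[string]string {
		out := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(s))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			kv := strings.SplitN(line, "=", 2)
			out[kv[0]] = strings.Trim(kv[1], `"`)
		}
		return out
	}

	func main() {
		sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		if parseOSRelease(sample)["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		}
	}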
	I0410 22:36:09.730193   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetMachineName
	I0410 22:36:09.730453   52176 buildroot.go:166] provisioning hostname "kubernetes-upgrade-407031"
	I0410 22:36:09.730478   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetMachineName
	I0410 22:36:09.730691   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:09.733631   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.733997   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:09.734028   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.734122   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:36:09.734311   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:09.734533   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:09.734683   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:36:09.734845   52176 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:09.735060   52176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:36:09.735078   52176 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-407031 && echo "kubernetes-upgrade-407031" | sudo tee /etc/hostname
	I0410 22:36:09.868833   52176 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-407031
	
	I0410 22:36:09.868869   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:09.871732   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.872065   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:09.872092   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.872276   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:36:09.872454   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:09.872610   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:09.872808   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:36:09.873027   52176 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:09.873208   52176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:36:09.873224   52176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-407031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-407031/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-407031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:36:09.994874   52176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
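The SSH command above is an idempotent /etc/hosts update: it maps 127.0.1.1 to the machine hostname only if no mapping for that name exists yet. A hypothetical Go equivalent of the same string manipulation, shown purely to make the shell logic explicit:

	// hosts_sketch.go - illustrative Go version of the /etc/hosts edit above.
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname returns hosts with 127.0.1.1 mapped to name exactly once.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // already present
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(hosts) {
			return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n", "kubernetes-upgrade-407031"))
	}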
	I0410 22:36:09.994908   52176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:36:09.994933   52176 buildroot.go:174] setting up certificates
	I0410 22:36:09.994945   52176 provision.go:84] configureAuth start
	I0410 22:36:09.994974   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetMachineName
	I0410 22:36:09.995275   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetIP
	I0410 22:36:09.998180   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.998626   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:09.998652   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:09.998826   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:10.000905   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.001314   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:10.001357   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.001497   52176 provision.go:143] copyHostCerts
	I0410 22:36:10.001554   52176 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:36:10.001570   52176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:36:10.001639   52176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:36:10.001742   52176 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:36:10.001751   52176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:36:10.001778   52176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:36:10.001887   52176 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:36:10.001897   52176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:36:10.001923   52176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:36:10.001985   52176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-407031 san=[127.0.0.1 192.168.39.180 kubernetes-upgrade-407031 localhost minikube]
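configureAuth copies the host CA material and then issues a server certificate whose SANs cover 127.0.0.1, the machine IP, the hostname, localhost and minikube. A rough Go sketch of generating such a certificate with crypto/x509 (self-signed here for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named above):

	// servercert_sketch.go - illustrative only, not minikube's provision code.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-407031"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"kubernetes-upgrade-407031", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.180")},
		}
		// Self-signed for the sketch; the real flow signs with the CA key pair.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}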
	I0410 22:36:10.088322   52176 provision.go:177] copyRemoteCerts
	I0410 22:36:10.088383   52176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:36:10.088433   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:10.090946   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.091261   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:10.091294   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.091508   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:36:10.091709   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:10.091868   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:36:10.092044   52176 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa Username:docker}
	I0410 22:36:10.179569   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:36:10.206465   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:36:10.233384   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0410 22:36:10.260476   52176 provision.go:87] duration metric: took 265.514952ms to configureAuth
	I0410 22:36:10.260518   52176 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:36:10.260692   52176 config.go:182] Loaded profile config "kubernetes-upgrade-407031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:36:10.260806   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:10.263926   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.264357   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:10.264431   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.264684   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:36:10.264917   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:10.265086   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:10.265301   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:36:10.265496   52176 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:10.265764   52176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:36:10.265805   52176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:36:10.565311   52176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:36:10.565370   52176 main.go:141] libmachine: Checking connection to Docker...
	I0410 22:36:10.565383   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetURL
	I0410 22:36:10.566678   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | Using libvirt version 6000000
	I0410 22:36:10.568595   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.568972   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:10.569022   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.569132   52176 main.go:141] libmachine: Docker is up and running!
	I0410 22:36:10.569157   52176 main.go:141] libmachine: Reticulating splines...
	I0410 22:36:10.569164   52176 client.go:171] duration metric: took 27.503956795s to LocalClient.Create
	I0410 22:36:10.569188   52176 start.go:167] duration metric: took 27.504014878s to libmachine.API.Create "kubernetes-upgrade-407031"
	I0410 22:36:10.569201   52176 start.go:293] postStartSetup for "kubernetes-upgrade-407031" (driver="kvm2")
	I0410 22:36:10.569215   52176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:36:10.569234   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:36:10.569504   52176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:36:10.569528   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:10.571717   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.572121   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:10.572151   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.572313   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:36:10.572500   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:10.572663   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:36:10.572821   52176 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa Username:docker}
	I0410 22:36:10.659048   52176 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:36:10.664200   52176 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:36:10.664232   52176 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:36:10.664304   52176 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:36:10.664376   52176 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:36:10.664529   52176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:36:10.676413   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:36:10.704384   52176 start.go:296] duration metric: took 135.166065ms for postStartSetup
	I0410 22:36:10.704460   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetConfigRaw
	I0410 22:36:10.705075   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetIP
	I0410 22:36:10.707902   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.708221   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:10.708254   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.708517   52176 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/config.json ...
	I0410 22:36:10.708716   52176 start.go:128] duration metric: took 27.665755135s to createHost
	I0410 22:36:10.708739   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:10.711225   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.711596   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:10.711623   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.711758   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:36:10.711972   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:10.712147   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:10.712328   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:36:10.712530   52176 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:10.712737   52176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:36:10.712752   52176 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0410 22:36:10.829593   52176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712788570.817266265
	
	I0410 22:36:10.829615   52176 fix.go:216] guest clock: 1712788570.817266265
	I0410 22:36:10.829622   52176 fix.go:229] Guest: 2024-04-10 22:36:10.817266265 +0000 UTC Remote: 2024-04-10 22:36:10.708728655 +0000 UTC m=+42.266539623 (delta=108.53761ms)
	I0410 22:36:10.829646   52176 fix.go:200] guest clock delta is within tolerance: 108.53761ms
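fix.go compares the guest's date +%s.%N output against the host clock and accepts the machine when the delta is small. A minimal Go sketch of that comparison using the timestamps from the log entry above (the one-second tolerance is an assumption, not minikube's exact threshold):

	// clockdelta_sketch.go - hypothetical guest/host clock comparison.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func main() {
		// Guest side: output of `date +%s.%N` as recorded above.
		out := "1712788570.817266265"
		secs, err := strconv.ParseFloat(out, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		// Host-side timestamp from the same log entry.
		local := time.Date(2024, 4, 10, 22, 36, 10, 708728655, time.UTC)
		delta := local.Sub(guest)
		if math.Abs(delta.Seconds()) < 1 { // 1s tolerance is an assumption
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock skewed by %v, would resync\n", delta)
		}
	}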
	I0410 22:36:10.829662   52176 start.go:83] releasing machines lock for "kubernetes-upgrade-407031", held for 27.786881029s
	I0410 22:36:10.829691   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:36:10.829974   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetIP
	I0410 22:36:10.833099   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.833505   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:10.833535   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.833705   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:36:10.834235   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:36:10.834468   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:36:10.834568   52176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:36:10.834618   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:10.834675   52176 ssh_runner.go:195] Run: cat /version.json
	I0410 22:36:10.834703   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:36:10.837599   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.837705   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.837936   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:10.837978   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.838210   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:10.838242   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:10.838285   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:36:10.838444   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:36:10.838538   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:10.838605   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:36:10.838677   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:36:10.838756   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:36:10.838835   52176 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa Username:docker}
	I0410 22:36:10.838868   52176 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa Username:docker}
	I0410 22:36:10.951545   52176 ssh_runner.go:195] Run: systemctl --version
	I0410 22:36:10.961312   52176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:36:11.143756   52176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:36:11.150193   52176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:36:11.150286   52176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:36:11.170456   52176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:36:11.170482   52176 start.go:494] detecting cgroup driver to use...
	I0410 22:36:11.170542   52176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:36:11.190726   52176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:36:11.208363   52176 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:36:11.208455   52176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:36:11.225494   52176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:36:11.243027   52176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:36:11.379003   52176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:36:11.530428   52176 docker.go:233] disabling docker service ...
	I0410 22:36:11.530507   52176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:36:11.548187   52176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:36:11.563497   52176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:36:11.710841   52176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:36:11.833597   52176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:36:11.849358   52176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:36:11.872178   52176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0410 22:36:11.872234   52176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:36:11.885448   52176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:36:11.885520   52176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:36:11.900366   52176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:36:11.914759   52176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
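The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.2 as its pause image and the cgroupfs cgroup manager with conmon placed in the pod cgroup. A hypothetical Go equivalent of those edits, shown only to make the intent of the sed expressions explicit:

	// crioconf_sketch.go - illustrative rewrite mirroring the sed edits above.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		// Point CRI-O at the pause image expected by Kubernetes v1.20.0.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
		// Switch to cgroupfs and run conmon in the pod cgroup.
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}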
	I0410 22:36:11.929051   52176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:36:11.941843   52176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:36:11.952363   52176 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:36:11.952489   52176 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:36:11.966954   52176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:36:11.977880   52176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:36:12.099318   52176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:36:12.250036   52176 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:36:12.250102   52176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:36:12.256140   52176 start.go:562] Will wait 60s for crictl version
	I0410 22:36:12.256214   52176 ssh_runner.go:195] Run: which crictl
	I0410 22:36:12.260768   52176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:36:12.304761   52176 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:36:12.304870   52176 ssh_runner.go:195] Run: crio --version
	I0410 22:36:12.338981   52176 ssh_runner.go:195] Run: crio --version
	I0410 22:36:12.386775   52176 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0410 22:36:12.387998   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetIP
	I0410 22:36:12.391387   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:12.391832   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:36:12.391866   52176 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:36:12.392095   52176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 22:36:12.398824   52176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:36:12.413161   52176 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-407031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-407031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:36:12.413310   52176 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:36:12.413373   52176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:36:12.457857   52176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:36:12.457915   52176 ssh_runner.go:195] Run: which lz4
	I0410 22:36:12.464383   52176 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0410 22:36:12.469435   52176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:36:12.469474   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0410 22:36:14.407969   52176 crio.go:462] duration metric: took 1.943642772s to copy over tarball
	I0410 22:36:14.408053   52176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:36:17.322243   52176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914159291s)
	I0410 22:36:17.322278   52176 crio.go:469] duration metric: took 2.914274208s to extract the tarball
	I0410 22:36:17.322288   52176 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:36:17.371995   52176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:36:17.431012   52176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:36:17.431041   52176 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:36:17.431117   52176 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:36:17.431169   52176 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:36:17.431206   52176 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0410 22:36:17.431230   52176 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0410 22:36:17.431368   52176 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:36:17.431182   52176 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:36:17.434810   52176 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:36:17.435194   52176 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:36:17.436254   52176 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0410 22:36:17.436607   52176 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0410 22:36:17.436634   52176 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:36:17.436663   52176 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:36:17.436608   52176 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:36:17.437006   52176 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:36:17.437205   52176 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:36:17.437615   52176 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:36:17.668898   52176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:36:17.674576   52176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:36:17.692144   52176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0410 22:36:17.698442   52176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0410 22:36:17.712815   52176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:36:17.731481   52176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0410 22:36:17.733657   52176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:36:17.807338   52176 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0410 22:36:17.807386   52176 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:36:17.807441   52176 ssh_runner.go:195] Run: which crictl
	I0410 22:36:17.850440   52176 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0410 22:36:17.850483   52176 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:36:17.850532   52176 ssh_runner.go:195] Run: which crictl
	I0410 22:36:17.891066   52176 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0410 22:36:17.891097   52176 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0410 22:36:17.891116   52176 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0410 22:36:17.891126   52176 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:36:17.891162   52176 ssh_runner.go:195] Run: which crictl
	I0410 22:36:17.891171   52176 ssh_runner.go:195] Run: which crictl
	I0410 22:36:17.926946   52176 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0410 22:36:17.926994   52176 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0410 22:36:17.927000   52176 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0410 22:36:17.927029   52176 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:36:17.927040   52176 ssh_runner.go:195] Run: which crictl
	I0410 22:36:17.927064   52176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:36:17.927067   52176 ssh_runner.go:195] Run: which crictl
	I0410 22:36:17.927001   52176 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0410 22:36:17.927122   52176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:36:17.927130   52176 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:36:17.927135   52176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0410 22:36:17.927153   52176 ssh_runner.go:195] Run: which crictl
	I0410 22:36:17.927165   52176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0410 22:36:18.020483   52176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0410 22:36:18.020504   52176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0410 22:36:18.071986   52176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0410 22:36:18.072095   52176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0410 22:36:18.072166   52176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:36:18.072265   52176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:36:18.079010   52176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0410 22:36:18.084533   52176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0410 22:36:18.129304   52176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0410 22:36:18.141637   52176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0410 22:36:18.306894   52176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:36:18.450871   52176 cache_images.go:92] duration metric: took 1.019813327s to LoadCachedImages
	W0410 22:36:18.450956   52176 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0410 22:36:18.450972   52176 kubeadm.go:928] updating node { 192.168.39.180 8443 v1.20.0 crio true true} ...
	I0410 22:36:18.451142   52176 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-407031 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-407031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
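(Sketch, not part of the captured run:) the kubelet drop-in rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. Assuming the standard systemd layout used in this log, it could be checked on the node with:

	# illustrative only: confirm the drop-in and the (not yet enabled) kubelet unit
	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	systemctl is-enabled kubelet    # kubeadm later warns that this service is not enabled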
	I0410 22:36:18.451263   52176 ssh_runner.go:195] Run: crio config
	I0410 22:36:18.506598   52176 cni.go:84] Creating CNI manager for ""
	I0410 22:36:18.584085   52176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:36:18.584106   52176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:36:18.584142   52176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-407031 NodeName:kubernetes-upgrade-407031 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0410 22:36:18.584384   52176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-407031"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
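(Sketch, not part of the captured run:) the kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init further down. One way to exercise such a config without mutating the host is kubeadm's dry-run mode; the path and binary location are taken from this log, the invocation itself is only illustrative:

	# illustrative only: dry-run the generated config
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run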
	I0410 22:36:18.584497   52176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0410 22:36:18.596331   52176 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:36:18.596402   52176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:36:18.607811   52176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0410 22:36:18.632110   52176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:36:18.652552   52176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0410 22:36:18.673757   52176 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I0410 22:36:18.678458   52176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:36:18.693332   52176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:36:18.816691   52176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:36:18.837367   52176 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031 for IP: 192.168.39.180
	I0410 22:36:18.837396   52176 certs.go:194] generating shared ca certs ...
	I0410 22:36:18.837419   52176 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:36:18.837611   52176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:36:18.837673   52176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:36:18.837691   52176 certs.go:256] generating profile certs ...
	I0410 22:36:18.837761   52176 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/client.key
	I0410 22:36:18.837780   52176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/client.crt with IP's: []
	I0410 22:36:19.043985   52176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/client.crt ...
	I0410 22:36:19.044027   52176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/client.crt: {Name:mk2d550c8ec6ff64697d3b81ed8103c4ee747c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:36:19.044276   52176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/client.key ...
	I0410 22:36:19.044303   52176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/client.key: {Name:mk9f94cdc152edd476dbd1b909b8478752647f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:36:19.044440   52176 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.key.cb16f985
	I0410 22:36:19.044473   52176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.crt.cb16f985 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.180]
	I0410 22:36:19.132634   52176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.crt.cb16f985 ...
	I0410 22:36:19.132664   52176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.crt.cb16f985: {Name:mk73a1a4f325eaaade175b83b50d774a46a021d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:36:19.132832   52176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.key.cb16f985 ...
	I0410 22:36:19.132849   52176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.key.cb16f985: {Name:mk1a230e0e81f7824608da5d2af95fe4831121cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:36:19.132961   52176 certs.go:381] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.crt.cb16f985 -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.crt
	I0410 22:36:19.133064   52176 certs.go:385] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.key.cb16f985 -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.key
	I0410 22:36:19.133128   52176 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.key
	I0410 22:36:19.133151   52176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.crt with IP's: []
	I0410 22:36:19.197084   52176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.crt ...
	I0410 22:36:19.197115   52176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.crt: {Name:mked67da14a7fd6e7707bd406f5be816d7a46a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:36:19.197274   52176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.key ...
	I0410 22:36:19.197287   52176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.key: {Name:mk267734d6c28f6e8107e7aa514d7ac6830379f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:36:19.197471   52176 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:36:19.197510   52176 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:36:19.197520   52176 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:36:19.197543   52176 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:36:19.197565   52176 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:36:19.197587   52176 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:36:19.197646   52176 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:36:19.198453   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:36:19.230202   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:36:19.260471   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:36:19.287805   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:36:19.320623   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0410 22:36:19.349782   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:36:19.381624   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:36:19.409910   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:36:19.439937   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:36:19.470764   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:36:19.501099   52176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:36:19.536054   52176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:36:19.557468   52176 ssh_runner.go:195] Run: openssl version
	I0410 22:36:19.564473   52176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:36:19.577626   52176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:36:19.583044   52176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:36:19.583107   52176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:36:19.589886   52176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:36:19.602281   52176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:36:19.614683   52176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:36:19.620465   52176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:36:19.620559   52176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:36:19.627399   52176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:36:19.639943   52176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:36:19.652879   52176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:36:19.658767   52176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:36:19.658840   52176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:36:19.665677   52176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
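(Sketch, not part of the captured run:) the openssl/ln pairs above follow the standard OpenSSL CA directory layout: each certificate under /etc/ssl/certs is reachable through a symlink named after its subject hash (e.g. b5213941.0 for minikubeCA.pem). The same idea in two lines, using a path taken from this log:

	# illustrative only: publish a cert under its OpenSSL subject-hash name
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"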
	I0410 22:36:19.685798   52176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:36:19.692835   52176 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0410 22:36:19.692921   52176 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-407031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-407031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:36:19.693038   52176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:36:19.693099   52176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:36:19.762016   52176 cri.go:89] found id: ""
	I0410 22:36:19.762134   52176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0410 22:36:19.776571   52176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:36:19.790486   52176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:36:19.810558   52176 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:36:19.810583   52176 kubeadm.go:156] found existing configuration files:
	
	I0410 22:36:19.810638   52176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:36:19.823735   52176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:36:19.823805   52176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:36:19.837871   52176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:36:19.854269   52176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:36:19.854349   52176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:36:19.866516   52176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:36:19.879651   52176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:36:19.879726   52176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:36:19.891888   52176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:36:19.903045   52176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:36:19.903118   52176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
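(Sketch, not part of the captured run:) the four grep/rm pairs above are minikube's stale-config check: each /etc/kubernetes/*.conf file is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init. Condensed into a loop for illustration (the real logic lives in kubeadm.go, not in shell):

	# illustrative only: the stale kubeconfig cleanup seen above
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done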
	I0410 22:36:19.915896   52176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:36:20.245476   52176 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:38:17.991539   52176 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:38:17.991765   52176 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0410 22:38:17.992832   52176 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:38:17.992936   52176 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:38:17.993125   52176 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:38:17.993382   52176 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:38:17.993576   52176 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:38:17.993726   52176 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:38:17.995887   52176 out.go:204]   - Generating certificates and keys ...
	I0410 22:38:17.995982   52176 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:38:17.996069   52176 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:38:17.996140   52176 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0410 22:38:17.996202   52176 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0410 22:38:17.996283   52176 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0410 22:38:17.996364   52176 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0410 22:38:17.996464   52176 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0410 22:38:17.996651   52176 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-407031 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
	I0410 22:38:17.996702   52176 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0410 22:38:17.996870   52176 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-407031 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
	I0410 22:38:17.996936   52176 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0410 22:38:17.996998   52176 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0410 22:38:17.997039   52176 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0410 22:38:17.997093   52176 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:38:17.997144   52176 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:38:17.997203   52176 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:38:17.997270   52176 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:38:17.997323   52176 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:38:17.997474   52176 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:38:17.997584   52176 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:38:17.997624   52176 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:38:17.997679   52176 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:38:17.999346   52176 out.go:204]   - Booting up control plane ...
	I0410 22:38:17.999420   52176 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:38:17.999499   52176 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:38:17.999584   52176 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:38:17.999690   52176 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:38:17.999864   52176 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:38:17.999935   52176 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:38:18.000010   52176 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:38:18.000184   52176 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:38:18.000246   52176 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:38:18.000431   52176 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:38:18.000499   52176 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:38:18.000752   52176 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:38:18.000856   52176 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:38:18.001026   52176 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:38:18.001086   52176 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:38:18.001267   52176 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:38:18.001276   52176 kubeadm.go:309] 
	I0410 22:38:18.001329   52176 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:38:18.001372   52176 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:38:18.001388   52176 kubeadm.go:309] 
	I0410 22:38:18.001445   52176 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:38:18.001504   52176 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:38:18.001659   52176 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:38:18.001668   52176 kubeadm.go:309] 
	I0410 22:38:18.001814   52176 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:38:18.001868   52176 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:38:18.001918   52176 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:38:18.001927   52176 kubeadm.go:309] 
	I0410 22:38:18.002082   52176 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:38:18.002166   52176 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:38:18.002174   52176 kubeadm.go:309] 
	I0410 22:38:18.002265   52176 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:38:18.002343   52176 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:38:18.002424   52176 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:38:18.002504   52176 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:38:18.002512   52176 kubeadm.go:309] 
	W0410 22:38:18.002634   52176 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-407031 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-407031 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-407031 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-407031 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
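(Sketch, not part of the captured run:) the init attempt above failed because the kubelet never answered on 127.0.0.1:10248, so minikube resets and retries below. The troubleshooting commands kubeadm itself suggests in that output, collected in one place:

	# illustrative only: follow kubeadm's own hints from the failure above
	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	curl -sSL http://localhost:10248/healthz    # the health probe kubeadm kept retrying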
	I0410 22:38:18.002672   52176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:38:19.979797   52176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.977101819s)
	I0410 22:38:19.979890   52176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:38:19.998165   52176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:38:20.011499   52176 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:38:20.011521   52176 kubeadm.go:156] found existing configuration files:
	
	I0410 22:38:20.011573   52176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:38:20.023973   52176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:38:20.024029   52176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:38:20.036649   52176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:38:20.049012   52176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:38:20.049078   52176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:38:20.062101   52176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:38:20.074402   52176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:38:20.074478   52176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:38:20.087563   52176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:38:20.099910   52176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:38:20.099972   52176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:38:20.113202   52176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:38:20.382532   52176 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:40:16.771162   52176 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:40:16.771264   52176 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0410 22:40:16.772869   52176 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:40:16.772925   52176 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:40:16.773043   52176 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:40:16.773197   52176 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:40:16.773297   52176 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:40:16.773352   52176 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:40:16.775055   52176 out.go:204]   - Generating certificates and keys ...
	I0410 22:40:16.775123   52176 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:40:16.775181   52176 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:40:16.775259   52176 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:40:16.775337   52176 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:40:16.775429   52176 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:40:16.775526   52176 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:40:16.775624   52176 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:40:16.775716   52176 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:40:16.775832   52176 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:40:16.775935   52176 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:40:16.775998   52176 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:40:16.776108   52176 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:40:16.776184   52176 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:40:16.776276   52176 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:40:16.776357   52176 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:40:16.776444   52176 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:40:16.776586   52176 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:40:16.776703   52176 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:40:16.776768   52176 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:40:16.776863   52176 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:40:16.778363   52176 out.go:204]   - Booting up control plane ...
	I0410 22:40:16.778464   52176 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:40:16.778549   52176 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:40:16.778628   52176 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:40:16.778750   52176 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:40:16.778932   52176 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:40:16.778989   52176 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:40:16.779046   52176 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:40:16.779210   52176 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:40:16.779294   52176 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:40:16.779496   52176 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:40:16.779560   52176 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:40:16.779795   52176 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:40:16.779887   52176 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:40:16.780094   52176 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:40:16.780159   52176 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:40:16.780333   52176 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:40:16.780351   52176 kubeadm.go:309] 
	I0410 22:40:16.780410   52176 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:40:16.780453   52176 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:40:16.780462   52176 kubeadm.go:309] 
	I0410 22:40:16.780494   52176 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:40:16.780523   52176 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:40:16.780607   52176 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:40:16.780620   52176 kubeadm.go:309] 
	I0410 22:40:16.780713   52176 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:40:16.780753   52176 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:40:16.780781   52176 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:40:16.780788   52176 kubeadm.go:309] 
	I0410 22:40:16.780880   52176 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:40:16.780948   52176 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:40:16.780954   52176 kubeadm.go:309] 
	I0410 22:40:16.781056   52176 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:40:16.781135   52176 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:40:16.781206   52176 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:40:16.781270   52176 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:40:16.781277   52176 kubeadm.go:309] 
	I0410 22:40:16.781327   52176 kubeadm.go:393] duration metric: took 3m57.088411354s to StartCluster
	I0410 22:40:16.781375   52176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:40:16.781423   52176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:40:16.834585   52176 cri.go:89] found id: ""
	I0410 22:40:16.834618   52176 logs.go:276] 0 containers: []
	W0410 22:40:16.834628   52176 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:40:16.834635   52176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:40:16.834699   52176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:40:16.876375   52176 cri.go:89] found id: ""
	I0410 22:40:16.876411   52176 logs.go:276] 0 containers: []
	W0410 22:40:16.876422   52176 logs.go:278] No container was found matching "etcd"
	I0410 22:40:16.876430   52176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:40:16.876496   52176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:40:16.915151   52176 cri.go:89] found id: ""
	I0410 22:40:16.915175   52176 logs.go:276] 0 containers: []
	W0410 22:40:16.915181   52176 logs.go:278] No container was found matching "coredns"
	I0410 22:40:16.915187   52176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:40:16.915242   52176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:40:16.954274   52176 cri.go:89] found id: ""
	I0410 22:40:16.954303   52176 logs.go:276] 0 containers: []
	W0410 22:40:16.954311   52176 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:40:16.954321   52176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:40:16.954369   52176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:40:16.997060   52176 cri.go:89] found id: ""
	I0410 22:40:16.997083   52176 logs.go:276] 0 containers: []
	W0410 22:40:16.997091   52176 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:40:16.997097   52176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:40:16.997170   52176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:40:17.040995   52176 cri.go:89] found id: ""
	I0410 22:40:17.041025   52176 logs.go:276] 0 containers: []
	W0410 22:40:17.041036   52176 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:40:17.041044   52176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:40:17.041128   52176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:40:17.077457   52176 cri.go:89] found id: ""
	I0410 22:40:17.077483   52176 logs.go:276] 0 containers: []
	W0410 22:40:17.077491   52176 logs.go:278] No container was found matching "kindnet"
	I0410 22:40:17.077500   52176 logs.go:123] Gathering logs for kubelet ...
	I0410 22:40:17.077511   52176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:40:17.133065   52176 logs.go:123] Gathering logs for dmesg ...
	I0410 22:40:17.133107   52176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:40:17.149191   52176 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:40:17.149219   52176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:40:17.270428   52176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:40:17.270453   52176 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:40:17.270466   52176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:40:17.363881   52176 logs.go:123] Gathering logs for container status ...
	I0410 22:40:17.363918   52176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0410 22:40:17.409406   52176 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0410 22:40:17.409458   52176 out.go:239] * 
	W0410 22:40:17.409542   52176 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:40:17.409573   52176 out.go:239] * 
	W0410 22:40:17.410537   52176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:40:17.413756   52176 out.go:177] 
	W0410 22:40:17.415706   52176 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:40:17.415759   52176 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0410 22:40:17.415777   52176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0410 22:40:17.417418   52176 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-407031 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
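The failed start above repeatedly reports that the kubelet never answered on localhost:10248. A minimal follow-up sketch, assuming shell access to the node through the profile this test uses (this is not output captured by the run), would be to execute the checks the kubeadm message suggests:

	minikube ssh -p kubernetes-upgrade-407031
	# on the node:
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	curl -sSL http://localhost:10248/healthz
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

The commands and the healthz endpoint are the ones quoted in the kubeadm error text; only the minikube ssh entry point is added here for context.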
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-407031
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-407031: (1.513905493s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-407031 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-407031 status --format={{.Host}}: exit status 7 (79.636966ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-407031 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-407031 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.619290759s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-407031 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-407031 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-407031 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (101.195066ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-407031] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18610
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-407031
	    minikube start -p kubernetes-upgrade-407031 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4070312 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-407031 --kubernetes-version=v1.30.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-407031 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-407031 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (13.787704815s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-10 22:41:10.642912563 +0000 UTC m=+4392.879341853
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-407031 -n kubernetes-upgrade-407031
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-407031 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-407031 logs -n 25: (1.343452488s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p NoKubernetes-857710                                | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	| start   | -p NoKubernetes-857710                                | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	|         | --driver=kvm2                                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-869202                             | running-upgrade-869202    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	| start   | -p kubernetes-upgrade-407031                          | kubernetes-upgrade-407031 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC |                     |
	|         | --memory=2200                                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |                |                     |                     |
	|         | --alsologtostderr                                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-857710 sudo                           | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |                |                     |                     |
	|         | service kubelet                                       |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-857710                                | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	| start   | -p stopped-upgrade-546741                             | minikube                  | jenkins | v1.26.0        | 10 Apr 24 22:35 UTC | 10 Apr 24 22:37 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                           |         |                |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |                |                     |                     |
	| start   | -p pause-262675                                       | pause-262675              | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:36 UTC | 10 Apr 24 22:37 UTC |
	|         | --alsologtostderr                                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	| start   | -p cert-expiration-464519                             | cert-expiration-464519    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:37 UTC |                     |
	|         | --memory=2048                                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |                |                     |                     |
	|         | --driver=kvm2                                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-546741 stop                           | minikube                  | jenkins | v1.26.0        | 10 Apr 24 22:37 UTC | 10 Apr 24 22:37 UTC |
	| start   | -p stopped-upgrade-546741                             | stopped-upgrade-546741    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:37 UTC | 10 Apr 24 22:37 UTC |
	|         | --memory=2200                                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	| delete  | -p pause-262675                                       | pause-262675              | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:37 UTC | 10 Apr 24 22:37 UTC |
	| start   | -p cert-options-849843                                | cert-options-849843       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:37 UTC | 10 Apr 24 22:38 UTC |
	|         | --memory=2048                                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |                |                     |                     |
	|         | --driver=kvm2                                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	| delete  | -p stopped-upgrade-546741                             | stopped-upgrade-546741    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:37 UTC | 10 Apr 24 22:37 UTC |
	| start   | -p old-k8s-version-862528                             | old-k8s-version-862528    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:37 UTC |                     |
	|         | --memory=2200                                         |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |                |                     |                     |
	|         | --kvm-network=default                                 |                           |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |                |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |                |                     |                     |
	|         | --keep-context=false                                  |                           |         |                |                     |                     |
	|         | --driver=kvm2                                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |                |                     |                     |
	| ssh     | cert-options-849843 ssh                               | cert-options-849843       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:38 UTC | 10 Apr 24 22:38 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |                |                     |                     |
	| ssh     | -p cert-options-849843 -- sudo                        | cert-options-849843       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:38 UTC | 10 Apr 24 22:38 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |                |                     |                     |
	| delete  | -p cert-options-849843                                | cert-options-849843       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:38 UTC | 10 Apr 24 22:38 UTC |
	| start   | -p no-preload-646133                                  | no-preload-646133         | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:38 UTC | 10 Apr 24 22:40 UTC |
	|         | --memory=2200 --alsologtostderr                       |                           |         |                |                     |                     |
	|         | --wait=true --preload=false                           |                           |         |                |                     |                     |
	|         | --driver=kvm2                                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                     |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-407031                          | kubernetes-upgrade-407031 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	| start   | -p kubernetes-upgrade-407031                          | kubernetes-upgrade-407031 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --memory=2200                                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                     |                           |         |                |                     |                     |
	|         | --alsologtostderr                                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-646133            | no-preload-646133         | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |                |                     |                     |
	| stop    | -p no-preload-646133                                  | no-preload-646133         | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                          | kubernetes-upgrade-407031 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --memory=2200                                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |                |                     |                     |
	|         | --driver=kvm2                                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                          | kubernetes-upgrade-407031 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:41 UTC |
	|         | --memory=2200                                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                     |                           |         |                |                     |                     |
	|         | --alsologtostderr                                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio                              |                           |         |                |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:40:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:40:56.919090   55708 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:40:56.919225   55708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:40:56.919236   55708 out.go:304] Setting ErrFile to fd 2...
	I0410 22:40:56.919242   55708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:40:56.919442   55708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:40:56.920011   55708 out.go:298] Setting JSON to false
	I0410 22:40:56.920963   55708 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4999,"bootTime":1712783858,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:40:56.921028   55708 start.go:139] virtualization: kvm guest
	I0410 22:40:56.924620   55708 out.go:177] * [kubernetes-upgrade-407031] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:40:56.926536   55708 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:40:56.926453   55708 notify.go:220] Checking for updates...
	I0410 22:40:56.928184   55708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:40:56.929533   55708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:40:56.931096   55708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:40:56.932665   55708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:40:56.934187   55708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:40:56.936157   55708 config.go:182] Loaded profile config "kubernetes-upgrade-407031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:40:56.936809   55708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:40:56.936867   55708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:40:56.952138   55708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40051
	I0410 22:40:56.952608   55708 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:40:56.953156   55708 main.go:141] libmachine: Using API Version  1
	I0410 22:40:56.953176   55708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:40:56.953515   55708 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:40:56.953752   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:40:56.954164   55708 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:40:56.954577   55708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:40:56.954637   55708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:40:56.970024   55708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34663
	I0410 22:40:56.970537   55708 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:40:56.971184   55708 main.go:141] libmachine: Using API Version  1
	I0410 22:40:56.971206   55708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:40:56.971575   55708 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:40:56.971791   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:40:57.008477   55708 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:40:57.010241   55708 start.go:297] selected driver: kvm2
	I0410 22:40:57.010259   55708 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-407031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:kubernetes-upgrade-407031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:40:57.010393   55708 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:40:57.011092   55708 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:40:57.011162   55708 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:40:57.027169   55708 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:40:57.027547   55708 cni.go:84] Creating CNI manager for ""
	I0410 22:40:57.027567   55708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:40:57.027611   55708 start.go:340] cluster config:
	{Name:kubernetes-upgrade-407031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:kubernetes-upgrade-407031 Namesp
ace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:40:57.027709   55708 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:40:57.029767   55708 out.go:177] * Starting "kubernetes-upgrade-407031" primary control-plane node in "kubernetes-upgrade-407031" cluster
	I0410 22:40:57.031234   55708 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 22:40:57.031274   55708 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0410 22:40:57.031281   55708 cache.go:56] Caching tarball of preloaded images
	I0410 22:40:57.031368   55708 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:40:57.031382   55708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.1 on crio
	I0410 22:40:57.031485   55708 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/config.json ...
	I0410 22:40:57.031683   55708 start.go:360] acquireMachinesLock for kubernetes-upgrade-407031: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:40:57.031777   55708 start.go:364] duration metric: took 41.088µs to acquireMachinesLock for "kubernetes-upgrade-407031"
	I0410 22:40:57.031801   55708 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:40:57.031817   55708 fix.go:54] fixHost starting: 
	I0410 22:40:57.032195   55708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:40:57.032234   55708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:40:57.046842   55708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33431
	I0410 22:40:57.047272   55708 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:40:57.047760   55708 main.go:141] libmachine: Using API Version  1
	I0410 22:40:57.047783   55708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:40:57.048150   55708 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:40:57.048349   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:40:57.048518   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetState
	I0410 22:40:57.050415   55708 fix.go:112] recreateIfNeeded on kubernetes-upgrade-407031: state=Running err=<nil>
	W0410 22:40:57.050440   55708 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:40:57.052447   55708 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-407031" VM ...
	I0410 22:40:53.316024   53086 logs.go:123] Gathering logs for kube-proxy [0f5bc9f17317dd7f3400e901c1e7689d65ae55c3efd64f1fcc4914a626b37444] ...
	I0410 22:40:53.316044   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f5bc9f17317dd7f3400e901c1e7689d65ae55c3efd64f1fcc4914a626b37444"
	I0410 22:40:53.353806   53086 logs.go:123] Gathering logs for container status ...
	I0410 22:40:53.353825   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:40:53.398500   53086 logs.go:123] Gathering logs for kubelet ...
	I0410 22:40:53.398515   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:40:53.500040   53086 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:40:53.500058   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:40:53.577988   53086 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:40:53.578000   53086 logs.go:123] Gathering logs for kube-apiserver [0770842f06e149a8200ec7842f5889a8e6d2fec0120a52b9ec6db6062b35c4fb] ...
	I0410 22:40:53.578011   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0770842f06e149a8200ec7842f5889a8e6d2fec0120a52b9ec6db6062b35c4fb"
	I0410 22:40:56.118743   53086 api_server.go:253] Checking apiserver healthz at https://192.168.72.34:8443/healthz ...
	I0410 22:40:56.119358   53086 api_server.go:269] stopped: https://192.168.72.34:8443/healthz: Get "https://192.168.72.34:8443/healthz": dial tcp 192.168.72.34:8443: connect: connection refused
	I0410 22:40:56.119394   53086 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:40:56.119435   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:40:56.160267   53086 cri.go:89] found id: "0770842f06e149a8200ec7842f5889a8e6d2fec0120a52b9ec6db6062b35c4fb"
	I0410 22:40:56.160281   53086 cri.go:89] found id: ""
	I0410 22:40:56.160289   53086 logs.go:276] 1 containers: [0770842f06e149a8200ec7842f5889a8e6d2fec0120a52b9ec6db6062b35c4fb]
	I0410 22:40:56.160340   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:40:56.165012   53086 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:40:56.165069   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:40:56.206196   53086 cri.go:89] found id: "cc30c01d0a590683bb86a56f83d5a0dd50d0290d576b3416d874d7e92a77700b"
	I0410 22:40:56.206208   53086 cri.go:89] found id: ""
	I0410 22:40:56.206214   53086 logs.go:276] 1 containers: [cc30c01d0a590683bb86a56f83d5a0dd50d0290d576b3416d874d7e92a77700b]
	I0410 22:40:56.206260   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:40:56.212337   53086 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:40:56.212411   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:40:56.250661   53086 cri.go:89] found id: "ad3b07623fa466fde13ebee2fa02cd44de73902c33b30e13dd80d3124724c780"
	I0410 22:40:56.250675   53086 cri.go:89] found id: ""
	I0410 22:40:56.250682   53086 logs.go:276] 1 containers: [ad3b07623fa466fde13ebee2fa02cd44de73902c33b30e13dd80d3124724c780]
	I0410 22:40:56.250738   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:40:56.255287   53086 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:40:56.255350   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:40:56.317696   53086 cri.go:89] found id: "f3ac4ec3826744c66e06c8e3619f74dd754625d0f296d7dcb89ae67b35c68959"
	I0410 22:40:56.317709   53086 cri.go:89] found id: ""
	I0410 22:40:56.317717   53086 logs.go:276] 1 containers: [f3ac4ec3826744c66e06c8e3619f74dd754625d0f296d7dcb89ae67b35c68959]
	I0410 22:40:56.317771   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:40:56.322303   53086 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:40:56.322366   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:40:56.371066   53086 cri.go:89] found id: "0f5bc9f17317dd7f3400e901c1e7689d65ae55c3efd64f1fcc4914a626b37444"
	I0410 22:40:56.371078   53086 cri.go:89] found id: ""
	I0410 22:40:56.371085   53086 logs.go:276] 1 containers: [0f5bc9f17317dd7f3400e901c1e7689d65ae55c3efd64f1fcc4914a626b37444]
	I0410 22:40:56.371148   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:40:56.376140   53086 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:40:56.376213   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:40:56.421347   53086 cri.go:89] found id: "469ea2e69b4d70ac81e7fa12af6a8088d9f5c72759e8bd0a91027d688bbc8861"
	I0410 22:40:56.421362   53086 cri.go:89] found id: ""
	I0410 22:40:56.421370   53086 logs.go:276] 1 containers: [469ea2e69b4d70ac81e7fa12af6a8088d9f5c72759e8bd0a91027d688bbc8861]
	I0410 22:40:56.421442   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:40:56.426389   53086 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:40:56.426451   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:40:56.478687   53086 cri.go:89] found id: ""
	I0410 22:40:56.478704   53086 logs.go:276] 0 containers: []
	W0410 22:40:56.478713   53086 logs.go:278] No container was found matching "kindnet"
	I0410 22:40:56.478720   53086 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:40:56.478781   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:40:56.522281   53086 cri.go:89] found id: "1752300ac4f2a2e40793a8958733d491dc01c88aedcfc0df7aec5395b825d57a"
	I0410 22:40:56.522296   53086 cri.go:89] found id: ""
	I0410 22:40:56.522303   53086 logs.go:276] 1 containers: [1752300ac4f2a2e40793a8958733d491dc01c88aedcfc0df7aec5395b825d57a]
	I0410 22:40:56.522366   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:40:56.527054   53086 logs.go:123] Gathering logs for kubelet ...
	I0410 22:40:56.527070   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:40:56.625112   53086 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:40:56.625137   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:40:56.709899   53086 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:40:56.709910   53086 logs.go:123] Gathering logs for kube-apiserver [0770842f06e149a8200ec7842f5889a8e6d2fec0120a52b9ec6db6062b35c4fb] ...
	I0410 22:40:56.709921   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0770842f06e149a8200ec7842f5889a8e6d2fec0120a52b9ec6db6062b35c4fb"
	I0410 22:40:56.757694   53086 logs.go:123] Gathering logs for storage-provisioner [1752300ac4f2a2e40793a8958733d491dc01c88aedcfc0df7aec5395b825d57a] ...
	I0410 22:40:56.757716   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1752300ac4f2a2e40793a8958733d491dc01c88aedcfc0df7aec5395b825d57a"
	I0410 22:40:56.800647   53086 logs.go:123] Gathering logs for container status ...
	I0410 22:40:56.800665   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:40:56.854846   53086 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:40:56.854865   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:40:57.172237   53086 logs.go:123] Gathering logs for dmesg ...
	I0410 22:40:57.172266   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:40:57.190058   53086 logs.go:123] Gathering logs for etcd [cc30c01d0a590683bb86a56f83d5a0dd50d0290d576b3416d874d7e92a77700b] ...
	I0410 22:40:57.190075   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc30c01d0a590683bb86a56f83d5a0dd50d0290d576b3416d874d7e92a77700b"
	I0410 22:40:57.246484   53086 logs.go:123] Gathering logs for coredns [ad3b07623fa466fde13ebee2fa02cd44de73902c33b30e13dd80d3124724c780] ...
	I0410 22:40:57.246503   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad3b07623fa466fde13ebee2fa02cd44de73902c33b30e13dd80d3124724c780"
	I0410 22:40:57.290793   53086 logs.go:123] Gathering logs for kube-scheduler [f3ac4ec3826744c66e06c8e3619f74dd754625d0f296d7dcb89ae67b35c68959] ...
	I0410 22:40:57.290809   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3ac4ec3826744c66e06c8e3619f74dd754625d0f296d7dcb89ae67b35c68959"
	I0410 22:40:57.339353   53086 logs.go:123] Gathering logs for kube-proxy [0f5bc9f17317dd7f3400e901c1e7689d65ae55c3efd64f1fcc4914a626b37444] ...
	I0410 22:40:57.339370   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f5bc9f17317dd7f3400e901c1e7689d65ae55c3efd64f1fcc4914a626b37444"
	I0410 22:40:57.385793   53086 logs.go:123] Gathering logs for kube-controller-manager [469ea2e69b4d70ac81e7fa12af6a8088d9f5c72759e8bd0a91027d688bbc8861] ...
	I0410 22:40:57.385814   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 469ea2e69b4d70ac81e7fa12af6a8088d9f5c72759e8bd0a91027d688bbc8861"
	I0410 22:40:57.054036   55708 machine.go:94] provisionDockerMachine start ...
	I0410 22:40:57.054062   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:40:57.054320   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:40:57.056927   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.057372   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:57.057399   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.057535   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:40:57.057762   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:57.057918   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:57.058061   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:40:57.058235   55708 main.go:141] libmachine: Using SSH client type: native
	I0410 22:40:57.058470   55708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:40:57.058486   55708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:40:57.178021   55708 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-407031
	
	I0410 22:40:57.178050   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetMachineName
	I0410 22:40:57.178271   55708 buildroot.go:166] provisioning hostname "kubernetes-upgrade-407031"
	I0410 22:40:57.178303   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetMachineName
	I0410 22:40:57.178694   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:40:57.182036   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.182474   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:57.182515   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.182676   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:40:57.182904   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:57.183044   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:57.183189   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:40:57.183333   55708 main.go:141] libmachine: Using SSH client type: native
	I0410 22:40:57.183561   55708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:40:57.183578   55708 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-407031 && echo "kubernetes-upgrade-407031" | sudo tee /etc/hostname
	I0410 22:40:57.313417   55708 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-407031
	
	I0410 22:40:57.313461   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:40:57.316625   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.317011   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:57.317046   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.317251   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:40:57.317447   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:57.317592   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:57.317739   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:40:57.318027   55708 main.go:141] libmachine: Using SSH client type: native
	I0410 22:40:57.318192   55708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:40:57.318212   55708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-407031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-407031/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-407031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:40:57.426509   55708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
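For reference, a minimal hand-run equivalent of the hostname provisioning shown above (the `sudo hostname ... | sudo tee /etc/hostname` command followed by the /etc/hosts edit), assuming root access inside the guest and using NAME only as a local shorthand, would be roughly:

    NAME=kubernetes-upgrade-407031
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # keep the 127.0.1.1 entry in /etc/hosts pointing at the new name
    if ! grep -q "$NAME" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi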
	I0410 22:40:57.426536   55708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:40:57.426570   55708 buildroot.go:174] setting up certificates
	I0410 22:40:57.426580   55708 provision.go:84] configureAuth start
	I0410 22:40:57.426588   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetMachineName
	I0410 22:40:57.426881   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetIP
	I0410 22:40:57.429561   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.429960   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:57.429994   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.430128   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:40:57.432367   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.432799   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:57.432819   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.432938   55708 provision.go:143] copyHostCerts
	I0410 22:40:57.433001   55708 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:40:57.433022   55708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:40:57.433095   55708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:40:57.433235   55708 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:40:57.433249   55708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:40:57.433284   55708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:40:57.433397   55708 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:40:57.433409   55708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:40:57.433451   55708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:40:57.433530   55708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-407031 san=[127.0.0.1 192.168.39.180 kubernetes-upgrade-407031 localhost minikube]
	I0410 22:40:57.499567   55708 provision.go:177] copyRemoteCerts
	I0410 22:40:57.499637   55708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:40:57.499664   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:40:57.503255   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.503671   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:57.503697   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.504017   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:40:57.504236   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:57.504440   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:40:57.504630   55708 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa Username:docker}
	I0410 22:40:57.588532   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:40:57.620727   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0410 22:40:57.648168   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:40:57.681568   55708 provision.go:87] duration metric: took 254.953941ms to configureAuth
	I0410 22:40:57.681608   55708 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:40:57.681791   55708 config.go:182] Loaded profile config "kubernetes-upgrade-407031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:40:57.681871   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:40:57.684359   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.684742   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:57.684778   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:57.684964   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:40:57.685168   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:57.685335   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:57.685554   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:40:57.685729   55708 main.go:141] libmachine: Using SSH client type: native
	I0410 22:40:57.685902   55708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:40:57.685917   55708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:40:58.551576   55708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:40:58.551600   55708 machine.go:97] duration metric: took 1.497548968s to provisionDockerMachine
	I0410 22:40:58.551611   55708 start.go:293] postStartSetup for "kubernetes-upgrade-407031" (driver="kvm2")
	I0410 22:40:58.551628   55708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:40:58.551652   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:40:58.551945   55708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:40:58.551983   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:40:58.554627   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:58.555027   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:58.555055   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:58.555163   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:40:58.555369   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:58.555508   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:40:58.555665   55708 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa Username:docker}
	I0410 22:40:58.640436   55708 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:40:58.645347   55708 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:40:58.645374   55708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:40:58.645454   55708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:40:58.645572   55708 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:40:58.645699   55708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:40:58.655721   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:40:58.682883   55708 start.go:296] duration metric: took 131.260123ms for postStartSetup
	I0410 22:40:58.682922   55708 fix.go:56] duration metric: took 1.651110573s for fixHost
	I0410 22:40:58.682946   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:40:58.686033   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:58.686428   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:58.686461   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:58.686617   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:40:58.686850   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:58.687048   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:58.687202   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:40:58.687429   55708 main.go:141] libmachine: Using SSH client type: native
	I0410 22:40:58.687643   55708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0410 22:40:58.687661   55708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:40:58.789907   55708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712788858.778547216
	
	I0410 22:40:58.789931   55708 fix.go:216] guest clock: 1712788858.778547216
	I0410 22:40:58.789939   55708 fix.go:229] Guest: 2024-04-10 22:40:58.778547216 +0000 UTC Remote: 2024-04-10 22:40:58.68292753 +0000 UTC m=+1.824406093 (delta=95.619686ms)
	I0410 22:40:58.789981   55708 fix.go:200] guest clock delta is within tolerance: 95.619686ms
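For reference, the delta reported above is simply guest clock minus the host-side "Remote" timestamp at the moment of comparison: 1712788858.778547216 s − 1712788858.682927530 s = 0.095619686 s ≈ 95.62 ms, which the log records as within tolerance.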
	I0410 22:40:58.789987   55708 start.go:83] releasing machines lock for "kubernetes-upgrade-407031", held for 1.758195377s
	I0410 22:40:58.790005   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:40:58.790290   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetIP
	I0410 22:40:58.793458   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:58.793848   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:58.793885   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:58.794072   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:40:58.794645   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:40:58.794851   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .DriverName
	I0410 22:40:58.794906   55708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:40:58.794948   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:40:58.795102   55708 ssh_runner.go:195] Run: cat /version.json
	I0410 22:40:58.795127   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHHostname
	I0410 22:40:58.797778   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:58.798121   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:58.798150   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:58.798198   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:58.798300   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:40:58.798538   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:58.798626   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:40:58.798665   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:40:58.798711   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:40:58.798814   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHPort
	I0410 22:40:58.798888   55708 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa Username:docker}
	I0410 22:40:58.798973   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHKeyPath
	I0410 22:40:58.799121   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetSSHUsername
	I0410 22:40:58.799269   55708 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kubernetes-upgrade-407031/id_rsa Username:docker}
	I0410 22:40:58.905668   55708 ssh_runner.go:195] Run: systemctl --version
	I0410 22:40:58.912435   55708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:40:59.168378   55708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:40:59.202102   55708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:40:59.202163   55708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:40:59.240119   55708 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0410 22:40:59.240146   55708 start.go:494] detecting cgroup driver to use...
	I0410 22:40:59.240208   55708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:40:59.281415   55708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:40:59.348895   55708 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:40:59.348964   55708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:40:59.382619   55708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:40:59.405085   55708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:40:59.639368   55708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:40:59.852647   55708 docker.go:233] disabling docker service ...
	I0410 22:40:59.852736   55708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:40:59.872833   55708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:40:59.895747   55708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:41:00.104498   55708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:41:00.310342   55708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:41:00.327155   55708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:41:00.350388   55708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:41:00.350450   55708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:41:00.361644   55708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:41:00.361723   55708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:41:00.373254   55708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:41:00.384365   55708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:41:00.395576   55708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:41:00.407151   55708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:41:00.418577   55708 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:41:00.433158   55708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:41:00.452732   55708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:41:00.467628   55708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:41:00.481655   55708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:41:00.679830   55708 ssh_runner.go:195] Run: sudo systemctl restart crio
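The sed/sysctl sequence above is how minikube points CRI-O at the desired pause image and cgroup driver before restarting it. A condensed, hand-runnable sketch of the same steps (CONF is just a local shorthand for the file the log edits) would look like:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # enable IP forwarding, as in the log
    sudo systemctl daemon-reload && sudo systemctl restart crio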
	I0410 22:41:01.111354   55708 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:41:01.111436   55708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:41:01.119190   55708 start.go:562] Will wait 60s for crictl version
	I0410 22:41:01.119241   55708 ssh_runner.go:195] Run: which crictl
	I0410 22:41:01.124178   55708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:41:01.209226   55708 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:41:01.209309   55708 ssh_runner.go:195] Run: crio --version
	I0410 22:41:01.354113   55708 ssh_runner.go:195] Run: crio --version
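The two runs above are minikube's runtime probe; done by hand inside the VM the same check is simply:

    sudo /usr/bin/crictl version   # RuntimeName: cri-o, RuntimeVersion: 1.29.1, RuntimeApiVersion: v1 (per the log)
    crio --version                 # CRI-O's own version banner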
	I0410 22:41:01.466813   55708 out.go:177] * Preparing Kubernetes v1.30.0-rc.1 on CRI-O 1.29.1 ...
	I0410 22:41:01.469101   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) Calling .GetIP
	I0410 22:41:01.472014   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:41:01.472387   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:0f:38", ip: ""} in network mk-kubernetes-upgrade-407031: {Iface:virbr3 ExpiryTime:2024-04-10 23:35:58 +0000 UTC Type:0 Mac:52:54:00:f3:0f:38 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:kubernetes-upgrade-407031 Clientid:01:52:54:00:f3:0f:38}
	I0410 22:41:01.472444   55708 main.go:141] libmachine: (kubernetes-upgrade-407031) DBG | domain kubernetes-upgrade-407031 has defined IP address 192.168.39.180 and MAC address 52:54:00:f3:0f:38 in network mk-kubernetes-upgrade-407031
	I0410 22:41:01.472667   55708 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 22:41:01.478558   55708 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-407031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0-rc.1 ClusterName:kubernetes-upgrade-407031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:41:01.478650   55708 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 22:41:01.478690   55708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:41:01.527273   55708 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:41:01.527295   55708 crio.go:433] Images already preloaded, skipping extraction
	I0410 22:41:01.527340   55708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:41:01.573661   55708 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:41:01.573685   55708 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:41:01.573692   55708 kubeadm.go:928] updating node { 192.168.39.180 8443 v1.30.0-rc.1 crio true true} ...
	I0410 22:41:01.573790   55708 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-407031 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.1 ClusterName:kubernetes-upgrade-407031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:41:01.573852   55708 ssh_runner.go:195] Run: crio config
	I0410 22:41:01.631745   55708 cni.go:84] Creating CNI manager for ""
	I0410 22:41:01.631767   55708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:41:01.631776   55708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:41:01.631798   55708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-407031 NodeName:kubernetes-upgrade-407031 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:41:01.631952   55708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-407031"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
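	The `"0%!"(MISSING)` values rendered in the evictionHard section above are a Go printf artifact, not the intended configuration: the kubeadm template carries the literal value "0%" (disk-based eviction disabled, matching the "disable disk resource management" comment), and the unescaped % is consumed as the start of a format verb with no matching argument when the config text passes through a printf-style call. A minimal, self-contained sketch of the effect (plain fmt calls, not minikube's actual template code):

package main

import "fmt"

// The template text contains the literal value "0%". Passed through a
// printf-style formatter with no matching argument, the `%"` sequence is
// parsed as a verb with a missing operand, so fmt prints `%!"(MISSING)`.
func main() {
	fmt.Printf("  nodefs.available: \"0%\"\n")  // prints:   nodefs.available: "0%!"(MISSING)
	fmt.Printf("  nodefs.available: \"0%%\"\n") // escaped %%: nodefs.available: "0%"
}

	The same substitution explains the nodefs.inodesFree and imagefs.available lines; the intended values in all three are "0%".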
	
	I0410 22:41:01.632007   55708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.1
	I0410 22:41:01.643961   55708 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:41:01.644030   55708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:41:01.656218   55708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (330 bytes)
	I0410 22:41:01.675971   55708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0410 22:41:01.695285   55708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0410 22:41:01.713689   55708 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I0410 22:41:01.718526   55708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:41:01.855739   55708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:41:01.875554   55708 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031 for IP: 192.168.39.180
	I0410 22:41:01.875581   55708 certs.go:194] generating shared ca certs ...
	I0410 22:41:01.875596   55708 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:41:01.875736   55708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:41:01.875783   55708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:41:01.875790   55708 certs.go:256] generating profile certs ...
	I0410 22:41:01.875857   55708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/client.key
	I0410 22:41:01.875899   55708 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.key.cb16f985
	I0410 22:41:01.875933   55708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.key
	I0410 22:41:01.876033   55708 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:41:01.876059   55708 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:41:01.876069   55708 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:41:01.876087   55708 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:41:01.876108   55708 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:41:01.876131   55708 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:41:01.876166   55708 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:41:01.876708   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:41:01.904653   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:40:59.925982   53086 api_server.go:253] Checking apiserver healthz at https://192.168.72.34:8443/healthz ...
	I0410 22:40:59.926734   53086 api_server.go:269] stopped: https://192.168.72.34:8443/healthz: Get "https://192.168.72.34:8443/healthz": dial tcp 192.168.72.34:8443: connect: connection refused
	I0410 22:40:59.926793   53086 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:40:59.926847   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:40:59.969155   53086 cri.go:89] found id: "0770842f06e149a8200ec7842f5889a8e6d2fec0120a52b9ec6db6062b35c4fb"
	I0410 22:40:59.969168   53086 cri.go:89] found id: ""
	I0410 22:40:59.969175   53086 logs.go:276] 1 containers: [0770842f06e149a8200ec7842f5889a8e6d2fec0120a52b9ec6db6062b35c4fb]
	I0410 22:40:59.969220   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:40:59.973796   53086 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:40:59.973852   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:41:00.026640   53086 cri.go:89] found id: "cc30c01d0a590683bb86a56f83d5a0dd50d0290d576b3416d874d7e92a77700b"
	I0410 22:41:00.026654   53086 cri.go:89] found id: ""
	I0410 22:41:00.026667   53086 logs.go:276] 1 containers: [cc30c01d0a590683bb86a56f83d5a0dd50d0290d576b3416d874d7e92a77700b]
	I0410 22:41:00.026723   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:41:00.031462   53086 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:41:00.031530   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:41:00.070469   53086 cri.go:89] found id: "ad3b07623fa466fde13ebee2fa02cd44de73902c33b30e13dd80d3124724c780"
	I0410 22:41:00.070484   53086 cri.go:89] found id: ""
	I0410 22:41:00.070492   53086 logs.go:276] 1 containers: [ad3b07623fa466fde13ebee2fa02cd44de73902c33b30e13dd80d3124724c780]
	I0410 22:41:00.070549   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:41:00.074883   53086 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:41:00.074947   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:41:00.123849   53086 cri.go:89] found id: "f3ac4ec3826744c66e06c8e3619f74dd754625d0f296d7dcb89ae67b35c68959"
	I0410 22:41:00.123860   53086 cri.go:89] found id: ""
	I0410 22:41:00.123867   53086 logs.go:276] 1 containers: [f3ac4ec3826744c66e06c8e3619f74dd754625d0f296d7dcb89ae67b35c68959]
	I0410 22:41:00.123908   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:41:00.128726   53086 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:41:00.128777   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:41:00.173933   53086 cri.go:89] found id: "0f5bc9f17317dd7f3400e901c1e7689d65ae55c3efd64f1fcc4914a626b37444"
	I0410 22:41:00.173944   53086 cri.go:89] found id: ""
	I0410 22:41:00.173949   53086 logs.go:276] 1 containers: [0f5bc9f17317dd7f3400e901c1e7689d65ae55c3efd64f1fcc4914a626b37444]
	I0410 22:41:00.173993   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:41:00.178415   53086 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:41:00.178459   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:41:00.221443   53086 cri.go:89] found id: "469ea2e69b4d70ac81e7fa12af6a8088d9f5c72759e8bd0a91027d688bbc8861"
	I0410 22:41:00.221457   53086 cri.go:89] found id: ""
	I0410 22:41:00.221465   53086 logs.go:276] 1 containers: [469ea2e69b4d70ac81e7fa12af6a8088d9f5c72759e8bd0a91027d688bbc8861]
	I0410 22:41:00.221519   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:41:00.227658   53086 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:41:00.227712   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:41:00.276364   53086 cri.go:89] found id: ""
	I0410 22:41:00.276381   53086 logs.go:276] 0 containers: []
	W0410 22:41:00.276390   53086 logs.go:278] No container was found matching "kindnet"
	I0410 22:41:00.276407   53086 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:41:00.276468   53086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:41:00.314264   53086 cri.go:89] found id: "1752300ac4f2a2e40793a8958733d491dc01c88aedcfc0df7aec5395b825d57a"
	I0410 22:41:00.314283   53086 cri.go:89] found id: ""
	I0410 22:41:00.314291   53086 logs.go:276] 1 containers: [1752300ac4f2a2e40793a8958733d491dc01c88aedcfc0df7aec5395b825d57a]
	I0410 22:41:00.314346   53086 ssh_runner.go:195] Run: which crictl
	I0410 22:41:00.318940   53086 logs.go:123] Gathering logs for kubelet ...
	I0410 22:41:00.318954   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:41:00.421003   53086 logs.go:123] Gathering logs for kube-proxy [0f5bc9f17317dd7f3400e901c1e7689d65ae55c3efd64f1fcc4914a626b37444] ...
	I0410 22:41:00.421016   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f5bc9f17317dd7f3400e901c1e7689d65ae55c3efd64f1fcc4914a626b37444"
	I0410 22:41:00.462731   53086 logs.go:123] Gathering logs for kube-controller-manager [469ea2e69b4d70ac81e7fa12af6a8088d9f5c72759e8bd0a91027d688bbc8861] ...
	I0410 22:41:00.462747   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 469ea2e69b4d70ac81e7fa12af6a8088d9f5c72759e8bd0a91027d688bbc8861"
	I0410 22:41:00.504613   53086 logs.go:123] Gathering logs for storage-provisioner [1752300ac4f2a2e40793a8958733d491dc01c88aedcfc0df7aec5395b825d57a] ...
	I0410 22:41:00.504629   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1752300ac4f2a2e40793a8958733d491dc01c88aedcfc0df7aec5395b825d57a"
	I0410 22:41:00.551700   53086 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:41:00.551715   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:41:00.819203   53086 logs.go:123] Gathering logs for container status ...
	I0410 22:41:00.819225   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:41:00.869058   53086 logs.go:123] Gathering logs for dmesg ...
	I0410 22:41:00.869074   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:41:00.886250   53086 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:41:00.886274   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:41:00.966626   53086 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:41:00.966653   53086 logs.go:123] Gathering logs for kube-apiserver [0770842f06e149a8200ec7842f5889a8e6d2fec0120a52b9ec6db6062b35c4fb] ...
	I0410 22:41:00.966677   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0770842f06e149a8200ec7842f5889a8e6d2fec0120a52b9ec6db6062b35c4fb"
	I0410 22:41:01.021234   53086 logs.go:123] Gathering logs for etcd [cc30c01d0a590683bb86a56f83d5a0dd50d0290d576b3416d874d7e92a77700b] ...
	I0410 22:41:01.021252   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc30c01d0a590683bb86a56f83d5a0dd50d0290d576b3416d874d7e92a77700b"
	I0410 22:41:01.073589   53086 logs.go:123] Gathering logs for coredns [ad3b07623fa466fde13ebee2fa02cd44de73902c33b30e13dd80d3124724c780] ...
	I0410 22:41:01.073605   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad3b07623fa466fde13ebee2fa02cd44de73902c33b30e13dd80d3124724c780"
	I0410 22:41:01.114775   53086 logs.go:123] Gathering logs for kube-scheduler [f3ac4ec3826744c66e06c8e3619f74dd754625d0f296d7dcb89ae67b35c68959] ...
	I0410 22:41:01.114796   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3ac4ec3826744c66e06c8e3619f74dd754625d0f296d7dcb89ae67b35c68959"
	I0410 22:41:01.930002   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:41:01.957071   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:41:01.985330   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0410 22:41:02.011612   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:41:02.038961   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:41:02.066953   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kubernetes-upgrade-407031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:41:02.094993   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:41:02.121813   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:41:02.150616   55708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:41:02.182218   55708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:41:02.211832   55708 ssh_runner.go:195] Run: openssl version
	I0410 22:41:02.217759   55708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:41:02.230592   55708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:41:02.235537   55708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:41:02.235615   55708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:41:02.241572   55708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:41:02.252456   55708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:41:02.265585   55708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:41:02.270497   55708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:41:02.270559   55708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:41:02.276388   55708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:41:02.289250   55708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:41:02.303613   55708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:41:02.308871   55708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:41:02.308998   55708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:41:02.315409   55708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:41:02.328418   55708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:41:02.333687   55708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:41:02.339958   55708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:41:02.346659   55708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:41:02.353193   55708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:41:02.359884   55708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:41:02.366069   55708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
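	The run of `openssl x509 -noout -in <cert> -checkend 86400` commands above checks that each existing control-plane certificate remains valid for at least the next 86400 seconds (24 hours); openssl exits 0 when the certificate will not expire within that window, which is what lets the cached certs be reused instead of regenerated. A rough Go equivalent of one such check, using a hypothetical validFor helper (a sketch only; minikube itself shells out to openssl exactly as logged):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM-encoded certificate at path stays valid
// for at least d, mirroring `openssl x509 -checkend <seconds>` (exit 0
// means the certificate will not expire within the window).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for the next 24h:", ok)
}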
	I0410 22:41:02.372146   55708 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-407031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0-rc.1 ClusterName:kubernetes-upgrade-407031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:41:02.372237   55708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:41:02.372276   55708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:41:02.419852   55708 cri.go:89] found id: "ea92b72f5751e613a8165f921c8788f6d1bdb2ef8c10e612e8f164a80e1a5b4b"
	I0410 22:41:02.419884   55708 cri.go:89] found id: "aaa32672ad77ae519c0ae34ee3d73255d6b59a09cb71244d32d251e808851678"
	I0410 22:41:02.419888   55708 cri.go:89] found id: "e6396e822ab4286cfd49d37382a344f8211f9015a1543a9b1988e0d5732b6d24"
	I0410 22:41:02.419892   55708 cri.go:89] found id: "fc81e6cef2cddfe80108e5a686e925d14dadd36118db34190a107402f5604913"
	I0410 22:41:02.419894   55708 cri.go:89] found id: ""
	I0410 22:41:02.419948   55708 ssh_runner.go:195] Run: sudo runc list -f json
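	StartCluster first asks CRI-O which kube-system containers already exist (the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` run above) before deciding what to restart during the upgrade; the four IDs found here match the exited attempt-1 etcd, kube-controller-manager, kube-scheduler and kube-apiserver containers listed later in the CRI-O log. A small local sketch of that listing step (function name is illustrative; it runs the command directly rather than through ssh_runner, using only the crictl invocation shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl query that appears in the
// log and returns the container IDs it prints, one per line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}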
	
	
	==> CRI-O <==
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.339123022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712788871339093409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b219518-40c2-45cb-b10d-f2f12a790587 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.340351287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cca8477d-0e0b-413b-b620-320c1799ab64 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.340429841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cca8477d-0e0b-413b-b620-320c1799ab64 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.340692337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d96734a0306bb40820a8f192b2de88863e87f3f74509846acc5cb60761426a15,PodSandboxId:7011c0a4933e60b243fe48841c867d2bac2f3622d3f1c45152fd114c03d358d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788864586399307,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af04fa4a5cc775fd343c731905d12ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 15ddeead,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38b0fad1e17787925f914a2ed8e7b8e6ea9f6f52538b54ea345f289e5336e36,PodSandboxId:ce29ebb7027d92b684214b5226871f0a3dc8c6457d7e9d67f8cd8f5b1aaaacd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712788864548103682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c94ad348ae5502e8a5dd492162b3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835769a666981b69fc434a8953ed52a2c29df6d6276c19a9cecf12c14e50d109,PodSandboxId:29386fe676aab7cc9250dc8729c20c6185ff8877a8eccc8807e680e162833bde,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712788864568008554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08815fa36c7c865cb7a4c955cebdadde,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240206cafa4f297dcf1362240ab28b3ffd46ef37093ba6f0d1df0005a314fbfc,PodSandboxId:007022d14ab77a3e11cde5bf45179e66feb9dfe35a783aed65cfa8b3a48c7790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712788864550165054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73b9fc5ec40808ddc5b023ad07532d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5636ab25,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea92b72f5751e613a8165f921c8788f6d1bdb2ef8c10e612e8f164a80e1a5b4b,PodSandboxId:4eabac6db2ca7d475d9aaf0fd46f84ed36cc1012387e5826419fa5de0b09beef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712788859389164544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af04fa4a5cc775fd343c731905d12ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 15ddeead,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaa32672ad77ae519c0ae34ee3d73255d6b59a09cb71244d32d251e808851678,PodSandboxId:10b9393b008d998cc9156a5b23b27f4499169aa5bcfcb4020c2aec2cbde4a1d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_EXITED,CreatedAt:1712788859261111273,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08815fa36c7c865cb7a4c955cebdadde,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6396e822ab4286cfd49d37382a344f8211f9015a1543a9b1988e0d5732b6d24,PodSandboxId:81cefa795c1b0b4d0204699c5b22a3581d31e0f078319ea6d411575c7d3dc43c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_EXITED,CreatedAt:1712788859239618275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c94ad348ae5502e8a5dd492162b3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc81e6cef2cddfe80108e5a686e925d14dadd36118db34190a107402f5604913,PodSandboxId:9d0d934a71650461389e46e29e3563694b700137e8e1246b3d6a1caf245df491,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_EXITED,CreatedAt:1712788859208693430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73b9fc5ec40808ddc5b023ad07532d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5636ab25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cca8477d-0e0b-413b-b620-320c1799ab64 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.389901734Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56e1f7e9-15a8-472b-8e2c-b3ff90581247 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.390004408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56e1f7e9-15a8-472b-8e2c-b3ff90581247 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.391310482Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b0db19a-4aa7-4543-bfda-5eef9cffaca5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.391684187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712788871391659224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b0db19a-4aa7-4543-bfda-5eef9cffaca5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.392186368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6c40587-d234-4934-943b-e0f00f2790a1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.392264295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6c40587-d234-4934-943b-e0f00f2790a1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.392493478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d96734a0306bb40820a8f192b2de88863e87f3f74509846acc5cb60761426a15,PodSandboxId:7011c0a4933e60b243fe48841c867d2bac2f3622d3f1c45152fd114c03d358d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788864586399307,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af04fa4a5cc775fd343c731905d12ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 15ddeead,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38b0fad1e17787925f914a2ed8e7b8e6ea9f6f52538b54ea345f289e5336e36,PodSandboxId:ce29ebb7027d92b684214b5226871f0a3dc8c6457d7e9d67f8cd8f5b1aaaacd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712788864548103682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c94ad348ae5502e8a5dd492162b3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835769a666981b69fc434a8953ed52a2c29df6d6276c19a9cecf12c14e50d109,PodSandboxId:29386fe676aab7cc9250dc8729c20c6185ff8877a8eccc8807e680e162833bde,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712788864568008554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08815fa36c7c865cb7a4c955cebdadde,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240206cafa4f297dcf1362240ab28b3ffd46ef37093ba6f0d1df0005a314fbfc,PodSandboxId:007022d14ab77a3e11cde5bf45179e66feb9dfe35a783aed65cfa8b3a48c7790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712788864550165054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73b9fc5ec40808ddc5b023ad07532d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5636ab25,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea92b72f5751e613a8165f921c8788f6d1bdb2ef8c10e612e8f164a80e1a5b4b,PodSandboxId:4eabac6db2ca7d475d9aaf0fd46f84ed36cc1012387e5826419fa5de0b09beef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712788859389164544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af04fa4a5cc775fd343c731905d12ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 15ddeead,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaa32672ad77ae519c0ae34ee3d73255d6b59a09cb71244d32d251e808851678,PodSandboxId:10b9393b008d998cc9156a5b23b27f4499169aa5bcfcb4020c2aec2cbde4a1d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_EXITED,CreatedAt:1712788859261111273,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08815fa36c7c865cb7a4c955cebdadde,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6396e822ab4286cfd49d37382a344f8211f9015a1543a9b1988e0d5732b6d24,PodSandboxId:81cefa795c1b0b4d0204699c5b22a3581d31e0f078319ea6d411575c7d3dc43c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_EXITED,CreatedAt:1712788859239618275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c94ad348ae5502e8a5dd492162b3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc81e6cef2cddfe80108e5a686e925d14dadd36118db34190a107402f5604913,PodSandboxId:9d0d934a71650461389e46e29e3563694b700137e8e1246b3d6a1caf245df491,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_EXITED,CreatedAt:1712788859208693430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73b9fc5ec40808ddc5b023ad07532d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5636ab25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6c40587-d234-4934-943b-e0f00f2790a1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.442053474Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ccdd6c3-f484-42bc-84fe-f92977338260 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.442155421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ccdd6c3-f484-42bc-84fe-f92977338260 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.443774469Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a34ed71d-0cff-4cd6-955a-fc9e88fc7cab name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.444413988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712788871444378602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a34ed71d-0cff-4cd6-955a-fc9e88fc7cab name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.445405861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=429c6b53-9d1f-4dd9-bc0c-3fc596818c2b name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.445560752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=429c6b53-9d1f-4dd9-bc0c-3fc596818c2b name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.445989209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d96734a0306bb40820a8f192b2de88863e87f3f74509846acc5cb60761426a15,PodSandboxId:7011c0a4933e60b243fe48841c867d2bac2f3622d3f1c45152fd114c03d358d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788864586399307,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af04fa4a5cc775fd343c731905d12ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 15ddeead,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38b0fad1e17787925f914a2ed8e7b8e6ea9f6f52538b54ea345f289e5336e36,PodSandboxId:ce29ebb7027d92b684214b5226871f0a3dc8c6457d7e9d67f8cd8f5b1aaaacd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712788864548103682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c94ad348ae5502e8a5dd492162b3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835769a666981b69fc434a8953ed52a2c29df6d6276c19a9cecf12c14e50d109,PodSandboxId:29386fe676aab7cc9250dc8729c20c6185ff8877a8eccc8807e680e162833bde,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712788864568008554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08815fa36c7c865cb7a4c955cebdadde,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240206cafa4f297dcf1362240ab28b3ffd46ef37093ba6f0d1df0005a314fbfc,PodSandboxId:007022d14ab77a3e11cde5bf45179e66feb9dfe35a783aed65cfa8b3a48c7790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712788864550165054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73b9fc5ec40808ddc5b023ad07532d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5636ab25,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea92b72f5751e613a8165f921c8788f6d1bdb2ef8c10e612e8f164a80e1a5b4b,PodSandboxId:4eabac6db2ca7d475d9aaf0fd46f84ed36cc1012387e5826419fa5de0b09beef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712788859389164544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af04fa4a5cc775fd343c731905d12ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 15ddeead,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaa32672ad77ae519c0ae34ee3d73255d6b59a09cb71244d32d251e808851678,PodSandboxId:10b9393b008d998cc9156a5b23b27f4499169aa5bcfcb4020c2aec2cbde4a1d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_EXITED,CreatedAt:1712788859261111273,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08815fa36c7c865cb7a4c955cebdadde,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6396e822ab4286cfd49d37382a344f8211f9015a1543a9b1988e0d5732b6d24,PodSandboxId:81cefa795c1b0b4d0204699c5b22a3581d31e0f078319ea6d411575c7d3dc43c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_EXITED,CreatedAt:1712788859239618275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c94ad348ae5502e8a5dd492162b3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc81e6cef2cddfe80108e5a686e925d14dadd36118db34190a107402f5604913,PodSandboxId:9d0d934a71650461389e46e29e3563694b700137e8e1246b3d6a1caf245df491,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_EXITED,CreatedAt:1712788859208693430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73b9fc5ec40808ddc5b023ad07532d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5636ab25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=429c6b53-9d1f-4dd9-bc0c-3fc596818c2b name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.491104568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14058144-46f6-41e2-a4b0-20e4ede12cc5 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.491242352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14058144-46f6-41e2-a4b0-20e4ede12cc5 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.493234718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e62f7642-7b9d-427b-a796-31e079e56929 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.493890716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712788871493803215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e62f7642-7b9d-427b-a796-31e079e56929 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.494573252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12d003d6-b9e1-4fe8-8ffe-2988e894f868 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.494671228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12d003d6-b9e1-4fe8-8ffe-2988e894f868 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:41:11 kubernetes-upgrade-407031 crio[1885]: time="2024-04-10 22:41:11.495062819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d96734a0306bb40820a8f192b2de88863e87f3f74509846acc5cb60761426a15,PodSandboxId:7011c0a4933e60b243fe48841c867d2bac2f3622d3f1c45152fd114c03d358d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788864586399307,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af04fa4a5cc775fd343c731905d12ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 15ddeead,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38b0fad1e17787925f914a2ed8e7b8e6ea9f6f52538b54ea345f289e5336e36,PodSandboxId:ce29ebb7027d92b684214b5226871f0a3dc8c6457d7e9d67f8cd8f5b1aaaacd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712788864548103682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c94ad348ae5502e8a5dd492162b3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835769a666981b69fc434a8953ed52a2c29df6d6276c19a9cecf12c14e50d109,PodSandboxId:29386fe676aab7cc9250dc8729c20c6185ff8877a8eccc8807e680e162833bde,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712788864568008554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08815fa36c7c865cb7a4c955cebdadde,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240206cafa4f297dcf1362240ab28b3ffd46ef37093ba6f0d1df0005a314fbfc,PodSandboxId:007022d14ab77a3e11cde5bf45179e66feb9dfe35a783aed65cfa8b3a48c7790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712788864550165054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73b9fc5ec40808ddc5b023ad07532d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5636ab25,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea92b72f5751e613a8165f921c8788f6d1bdb2ef8c10e612e8f164a80e1a5b4b,PodSandboxId:4eabac6db2ca7d475d9aaf0fd46f84ed36cc1012387e5826419fa5de0b09beef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712788859389164544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af04fa4a5cc775fd343c731905d12ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 15ddeead,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaa32672ad77ae519c0ae34ee3d73255d6b59a09cb71244d32d251e808851678,PodSandboxId:10b9393b008d998cc9156a5b23b27f4499169aa5bcfcb4020c2aec2cbde4a1d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_EXITED,CreatedAt:1712788859261111273,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08815fa36c7c865cb7a4c955cebdadde,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6396e822ab4286cfd49d37382a344f8211f9015a1543a9b1988e0d5732b6d24,PodSandboxId:81cefa795c1b0b4d0204699c5b22a3581d31e0f078319ea6d411575c7d3dc43c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_EXITED,CreatedAt:1712788859239618275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c94ad348ae5502e8a5dd492162b3a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc81e6cef2cddfe80108e5a686e925d14dadd36118db34190a107402f5604913,PodSandboxId:9d0d934a71650461389e46e29e3563694b700137e8e1246b3d6a1caf245df491,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_EXITED,CreatedAt:1712788859208693430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-407031,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73b9fc5ec40808ddc5b023ad07532d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5636ab25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12d003d6-b9e1-4fe8-8ffe-2988e894f868 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d96734a0306bb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   6 seconds ago       Running             etcd                      2                   7011c0a4933e6       etcd-kubernetes-upgrade-407031
	835769a666981       577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090   7 seconds ago       Running             kube-controller-manager   2                   29386fe676aab       kube-controller-manager-kubernetes-upgrade-407031
	240206cafa4f2       bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895   7 seconds ago       Running             kube-apiserver            2                   007022d14ab77       kube-apiserver-kubernetes-upgrade-407031
	d38b0fad1e177       ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b   7 seconds ago       Running             kube-scheduler            2                   ce29ebb7027d9       kube-scheduler-kubernetes-upgrade-407031
	ea92b72f5751e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   12 seconds ago      Exited              etcd                      1                   4eabac6db2ca7       etcd-kubernetes-upgrade-407031
	aaa32672ad77a       577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090   12 seconds ago      Exited              kube-controller-manager   1                   10b9393b008d9       kube-controller-manager-kubernetes-upgrade-407031
	e6396e822ab42       ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b   12 seconds ago      Exited              kube-scheduler            1                   81cefa795c1b0       kube-scheduler-kubernetes-upgrade-407031
	fc81e6cef2cdd       bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895   12 seconds ago      Exited              kube-apiserver            1                   9d0d934a71650       kube-apiserver-kubernetes-upgrade-407031
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-407031
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-407031
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:40:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-407031
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 22:41:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 22:41:07 +0000   Wed, 10 Apr 2024 22:40:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 22:41:07 +0000   Wed, 10 Apr 2024 22:40:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 22:41:07 +0000   Wed, 10 Apr 2024 22:40:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 22:41:07 +0000   Wed, 10 Apr 2024 22:40:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    kubernetes-upgrade-407031
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a462bc6803d459cb38441b2d490f1b6
	  System UUID:                6a462bc6-803d-459c-b384-41b2d490f1b6
	  Boot ID:                    7ed8bab5-3d87-47ab-ac6d-ecb5c9ef4dbd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.1
	  Kube-Proxy Version:         v1.30.0-rc.1
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 kube-apiserver-kubernetes-upgrade-407031             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-407031    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-scheduler-kubernetes-upgrade-407031             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                550m (27%)  0 (0%)
	  memory             0 (0%)      0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 25s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  24s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23s (x8 over 25s)  kubelet  Node kubernetes-upgrade-407031 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 25s)  kubelet  Node kubernetes-upgrade-407031 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 25s)  kubelet  Node kubernetes-upgrade-407031 status is now: NodeHasSufficientPID
	  Normal  Starting                 7s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)    kubelet  Node kubernetes-upgrade-407031 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)    kubelet  Node kubernetes-upgrade-407031 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)    kubelet  Node kubernetes-upgrade-407031 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +2.813420] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.663422] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.253179] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.061589] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072687] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.167257] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.140697] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.290042] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +4.572657] systemd-fstab-generator[728]: Ignoring "noauto" option for root device
	[  +0.062852] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.771731] systemd-fstab-generator[850]: Ignoring "noauto" option for root device
	[  +9.161360] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[  +0.093292] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.685430] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.199932] systemd-fstab-generator[1777]: Ignoring "noauto" option for root device
	[  +0.259245] systemd-fstab-generator[1830]: Ignoring "noauto" option for root device
	[  +0.200055] systemd-fstab-generator[1842]: Ignoring "noauto" option for root device
	[  +0.382826] systemd-fstab-generator[1872]: Ignoring "noauto" option for root device
	[Apr10 22:41] kauditd_printk_skb: 198 callbacks suppressed
	[  +0.614888] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +2.020227] systemd-fstab-generator[2329]: Ignoring "noauto" option for root device
	[  +5.790215] systemd-fstab-generator[2611]: Ignoring "noauto" option for root device
	[  +0.096853] kauditd_printk_skb: 85 callbacks suppressed
	
	
	==> etcd [d96734a0306bb40820a8f192b2de88863e87f3f74509846acc5cb60761426a15] <==
	{"level":"info","ts":"2024-04-10T22:41:04.953932Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:41:04.954085Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:41:04.95775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 switched to configuration voters=(808613133158692504)"}
	{"level":"info","ts":"2024-04-10T22:41:04.957954Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","added-peer-id":"b38c55c42a3b698","added-peer-peer-urls":["https://192.168.39.180:2380"]}
	{"level":"info","ts":"2024-04-10T22:41:04.95821Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:41:04.960064Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:41:04.965117Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-10T22:41:04.965369Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b38c55c42a3b698","initial-advertise-peer-urls":["https://192.168.39.180:2380"],"listen-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-10T22:41:04.965439Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-10T22:41:04.965536Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-04-10T22:41:04.965561Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-04-10T22:41:06.122697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-10T22:41:06.122817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-10T22:41:06.12294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgPreVoteResp from b38c55c42a3b698 at term 2"}
	{"level":"info","ts":"2024-04-10T22:41:06.122977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became candidate at term 3"}
	{"level":"info","ts":"2024-04-10T22:41:06.123002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgVoteResp from b38c55c42a3b698 at term 3"}
	{"level":"info","ts":"2024-04-10T22:41:06.123029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became leader at term 3"}
	{"level":"info","ts":"2024-04-10T22:41:06.123063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b38c55c42a3b698 elected leader b38c55c42a3b698 at term 3"}
	{"level":"info","ts":"2024-04-10T22:41:06.131344Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b38c55c42a3b698","local-member-attributes":"{Name:kubernetes-upgrade-407031 ClientURLs:[https://192.168.39.180:2379]}","request-path":"/0/members/b38c55c42a3b698/attributes","cluster-id":"5a7d3c553a64e690","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-10T22:41:06.131437Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:41:06.133516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.180:2379"}
	{"level":"info","ts":"2024-04-10T22:41:06.133804Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:41:06.134959Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-10T22:41:06.135008Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-10T22:41:06.136409Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [ea92b72f5751e613a8165f921c8788f6d1bdb2ef8c10e612e8f164a80e1a5b4b] <==
	{"level":"info","ts":"2024-04-10T22:41:00.080933Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"54.46353ms"}
	{"level":"info","ts":"2024-04-10T22:41:00.09557Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-10T22:41:00.172814Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","commit-index":305}
	{"level":"info","ts":"2024-04-10T22:41:00.173059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-10T22:41:00.173238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became follower at term 2"}
	{"level":"info","ts":"2024-04-10T22:41:00.17326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b38c55c42a3b698 [peers: [], term: 2, commit: 305, applied: 0, lastindex: 305, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-10T22:41:00.186216Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-10T22:41:00.219572Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":299}
	{"level":"info","ts":"2024-04-10T22:41:00.227916Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-10T22:41:00.237398Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b38c55c42a3b698","timeout":"7s"}
	{"level":"info","ts":"2024-04-10T22:41:00.2377Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b38c55c42a3b698"}
	{"level":"info","ts":"2024-04-10T22:41:00.237781Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b38c55c42a3b698","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-10T22:41:00.245467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 switched to configuration voters=(808613133158692504)"}
	{"level":"info","ts":"2024-04-10T22:41:00.245612Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","added-peer-id":"b38c55c42a3b698","added-peer-peer-urls":["https://192.168.39.180:2380"]}
	{"level":"info","ts":"2024-04-10T22:41:00.245991Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:41:00.246097Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:41:00.259694Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b38c55c42a3b698","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-04-10T22:41:00.274398Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-10T22:41:00.270108Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:41:00.277077Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-04-10T22:41:00.281566Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-04-10T22:41:00.281925Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:41:00.282403Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:41:00.303952Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-10T22:41:00.282808Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b38c55c42a3b698","initial-advertise-peer-urls":["https://192.168.39.180:2380"],"listen-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> kernel <==
	 22:41:11 up 0 min,  0 users,  load average: 0.66, 0.17, 0.06
	Linux kubernetes-upgrade-407031 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [240206cafa4f297dcf1362240ab28b3ffd46ef37093ba6f0d1df0005a314fbfc] <==
	I0410 22:41:07.572981       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0410 22:41:07.576040       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0410 22:41:07.635125       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0410 22:41:07.640050       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0410 22:41:07.640139       1 policy_source.go:224] refreshing policies
	I0410 22:41:07.640241       1 shared_informer.go:320] Caches are synced for configmaps
	I0410 22:41:07.640451       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0410 22:41:07.640487       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0410 22:41:07.640608       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0410 22:41:07.640667       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0410 22:41:07.643010       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0410 22:41:07.643175       1 aggregator.go:165] initial CRD sync complete...
	I0410 22:41:07.643212       1 autoregister_controller.go:141] Starting autoregister controller
	I0410 22:41:07.643235       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0410 22:41:07.643257       1 cache.go:39] Caches are synced for autoregister controller
	I0410 22:41:07.647542       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0410 22:41:07.651097       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0410 22:41:07.653005       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0410 22:41:07.682234       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0410 22:41:08.544134       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0410 22:41:09.282434       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0410 22:41:09.298473       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0410 22:41:09.331385       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0410 22:41:09.465956       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0410 22:41:09.473705       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [fc81e6cef2cddfe80108e5a686e925d14dadd36118db34190a107402f5604913] <==
	I0410 22:40:59.795749       1 options.go:221] external host was not specified, using 192.168.39.180
	I0410 22:40:59.798815       1 server.go:148] Version: v1.30.0-rc.1
	I0410 22:40:59.798943       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [835769a666981b69fc434a8953ed52a2c29df6d6276c19a9cecf12c14e50d109] <==
	I0410 22:41:10.204313       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0410 22:41:10.204322       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0410 22:41:10.359408       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0410 22:41:10.359456       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0410 22:41:10.359465       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0410 22:41:10.404478       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0410 22:41:10.404554       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0410 22:41:10.404565       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0410 22:41:10.505327       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0410 22:41:10.505448       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0410 22:41:10.505598       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0410 22:41:10.505619       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0410 22:41:10.559346       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0410 22:41:10.559574       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0410 22:41:10.559643       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0410 22:41:10.559903       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0410 22:41:10.560280       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0410 22:41:10.560003       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0410 22:41:10.560069       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0410 22:41:10.560653       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0410 22:41:10.560078       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0410 22:41:10.560116       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0410 22:41:10.560123       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0410 22:41:10.559978       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0410 22:41:10.561456       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	
	
	==> kube-controller-manager [aaa32672ad77ae519c0ae34ee3d73255d6b59a09cb71244d32d251e808851678] <==
	
	
	==> kube-scheduler [d38b0fad1e17787925f914a2ed8e7b8e6ea9f6f52538b54ea345f289e5336e36] <==
	I0410 22:41:05.898131       1 serving.go:380] Generated self-signed cert in-memory
	I0410 22:41:07.691574       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.1"
	I0410 22:41:07.694977       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:41:07.706783       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0410 22:41:07.709391       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0410 22:41:07.709429       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0410 22:41:07.710077       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0410 22:41:07.709455       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0410 22:41:07.711230       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0410 22:41:07.709478       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0410 22:41:07.717938       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0410 22:41:07.810288       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0410 22:41:07.813621       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0410 22:41:07.819205       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [e6396e822ab4286cfd49d37382a344f8211f9015a1543a9b1988e0d5732b6d24] <==
	
	
	==> kubelet <==
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323259    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08815fa36c7c865cb7a4c955cebdadde-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-407031\" (UID: \"08815fa36c7c865cb7a4c955cebdadde\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323320    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08815fa36c7c865cb7a4c955cebdadde-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-407031\" (UID: \"08815fa36c7c865cb7a4c955cebdadde\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323336    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/af04fa4a5cc775fd343c731905d12ea9-etcd-certs\") pod \"etcd-kubernetes-upgrade-407031\" (UID: \"af04fa4a5cc775fd343c731905d12ea9\") " pod="kube-system/etcd-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323359    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a73b9fc5ec40808ddc5b023ad07532d5-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-407031\" (UID: \"a73b9fc5ec40808ddc5b023ad07532d5\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323381    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a73b9fc5ec40808ddc5b023ad07532d5-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-407031\" (UID: \"a73b9fc5ec40808ddc5b023ad07532d5\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323395    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08815fa36c7c865cb7a4c955cebdadde-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-407031\" (UID: \"08815fa36c7c865cb7a4c955cebdadde\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323423    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08815fa36c7c865cb7a4c955cebdadde-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-407031\" (UID: \"08815fa36c7c865cb7a4c955cebdadde\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323440    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08815fa36c7c865cb7a4c955cebdadde-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-407031\" (UID: \"08815fa36c7c865cb7a4c955cebdadde\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323465    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59c94ad348ae5502e8a5dd492162b3a0-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-407031\" (UID: \"59c94ad348ae5502e8a5dd492162b3a0\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323480    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/af04fa4a5cc775fd343c731905d12ea9-etcd-data\") pod \"etcd-kubernetes-upgrade-407031\" (UID: \"af04fa4a5cc775fd343c731905d12ea9\") " pod="kube-system/etcd-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.323505    2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a73b9fc5ec40808ddc5b023ad07532d5-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-407031\" (UID: \"a73b9fc5ec40808ddc5b023ad07532d5\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.327107    2336 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: E0410 22:41:04.328043    2336 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.180:8443: connect: connection refused" node="kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.517062    2336 scope.go:117] "RemoveContainer" containerID="fc81e6cef2cddfe80108e5a686e925d14dadd36118db34190a107402f5604913"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.518082    2336 scope.go:117] "RemoveContainer" containerID="aaa32672ad77ae519c0ae34ee3d73255d6b59a09cb71244d32d251e808851678"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.518614    2336 scope.go:117] "RemoveContainer" containerID="e6396e822ab4286cfd49d37382a344f8211f9015a1543a9b1988e0d5732b6d24"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.520159    2336 scope.go:117] "RemoveContainer" containerID="ea92b72f5751e613a8165f921c8788f6d1bdb2ef8c10e612e8f164a80e1a5b4b"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: E0410 22:41:04.626998    2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-407031?timeout=10s\": dial tcp 192.168.39.180:8443: connect: connection refused" interval="800ms"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:04.730063    2336 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-407031"
	Apr 10 22:41:04 kubernetes-upgrade-407031 kubelet[2336]: E0410 22:41:04.731195    2336 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.180:8443: connect: connection refused" node="kubernetes-upgrade-407031"
	Apr 10 22:41:05 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:05.533130    2336 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-407031"
	Apr 10 22:41:07 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:07.682539    2336 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-407031"
	Apr 10 22:41:07 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:07.683021    2336 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-407031"
	Apr 10 22:41:08 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:08.005212    2336 apiserver.go:52] "Watching apiserver"
	Apr 10 22:41:08 kubernetes-upgrade-407031 kubelet[2336]: I0410 22:41:08.022102    2336 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:41:10.986301   55907 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18610-5679/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
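Note on the "bufio.Scanner: token too long" error captured in the stderr block above: Go's bufio.Scanner rejects any single line longer than its default 64 KiB token limit, which is what happened while reading lastStart.txt. The sketch below is only a generic illustration of that error class and of how an enlarged buffer avoids it; it is not the minikube logs.go implementation, and the file name is reused from the error message purely as a hypothetical example.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path, borrowed from the error message above for illustration only.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default maximum token size is bufio.MaxScanTokenSize (64 KiB); a single
		// over-long line then yields "bufio.Scanner: token too long". Giving the
		// scanner a larger buffer (here up to 1 MiB) sidesteps that limit.
		sc.Buffer(make([]byte, 64*1024), 1024*1024)
		for sc.Scan() {
			_ = sc.Text() // each complete line, processed as needed
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}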
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-407031 -n kubernetes-upgrade-407031
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-407031 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-407031 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-407031 describe pod storage-provisioner: exit status 1 (61.332716ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-407031 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-407031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-407031
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-407031: (1.098243764s)
--- FAIL: TestKubernetesUpgrade (345.35s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (52.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-262675 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0410 22:36:37.160572   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:36:54.112904   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:36:59.610526   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-262675 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.616066567s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-262675] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18610
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-262675" primary control-plane node in "pause-262675" cluster
	* Updating the running kvm2 "pause-262675" VM ...
	* Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-262675" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 22:36:25.192202   52815 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:36:25.192489   52815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:36:25.192502   52815 out.go:304] Setting ErrFile to fd 2...
	I0410 22:36:25.192506   52815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:36:25.192701   52815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:36:25.193285   52815 out.go:298] Setting JSON to false
	I0410 22:36:25.194277   52815 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4728,"bootTime":1712783858,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:36:25.194343   52815 start.go:139] virtualization: kvm guest
	I0410 22:36:25.196662   52815 out.go:177] * [pause-262675] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:36:25.198206   52815 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:36:25.199698   52815 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:36:25.198266   52815 notify.go:220] Checking for updates...
	I0410 22:36:25.202475   52815 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:36:25.203995   52815 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:36:25.205328   52815 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:36:25.206751   52815 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:36:25.208619   52815 config.go:182] Loaded profile config "pause-262675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:36:25.209101   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:36:25.209173   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:36:25.224599   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I0410 22:36:25.224938   52815 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:36:25.225411   52815 main.go:141] libmachine: Using API Version  1
	I0410 22:36:25.225428   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:36:25.225830   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:36:25.226031   52815 main.go:141] libmachine: (pause-262675) Calling .DriverName
	I0410 22:36:25.226348   52815 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:36:25.226694   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:36:25.226756   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:36:25.241057   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I0410 22:36:25.241522   52815 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:36:25.242087   52815 main.go:141] libmachine: Using API Version  1
	I0410 22:36:25.242110   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:36:25.242434   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:36:25.242620   52815 main.go:141] libmachine: (pause-262675) Calling .DriverName
	I0410 22:36:25.279740   52815 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:36:25.281209   52815 start.go:297] selected driver: kvm2
	I0410 22:36:25.281238   52815 start.go:901] validating driver "kvm2" against &{Name:pause-262675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.29.3 ClusterName:pause-262675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.144 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:36:25.281379   52815 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:36:25.281709   52815 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:36:25.281776   52815 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:36:25.297054   52815 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:36:25.297809   52815 cni.go:84] Creating CNI manager for ""
	I0410 22:36:25.297828   52815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:36:25.297898   52815 start.go:340] cluster config:
	{Name:pause-262675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-262675 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.144 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:36:25.298056   52815 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:36:25.300173   52815 out.go:177] * Starting "pause-262675" primary control-plane node in "pause-262675" cluster
	I0410 22:36:25.301873   52815 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:36:25.301917   52815 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 22:36:25.301927   52815 cache.go:56] Caching tarball of preloaded images
	I0410 22:36:25.302025   52815 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:36:25.302040   52815 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 22:36:25.302260   52815 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/pause-262675/config.json ...
	I0410 22:36:25.302515   52815 start.go:360] acquireMachinesLock for pause-262675: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:36:35.225478   52815 start.go:364] duration metric: took 9.922900016s to acquireMachinesLock for "pause-262675"
	I0410 22:36:35.225536   52815 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:36:35.225580   52815 fix.go:54] fixHost starting: 
	I0410 22:36:35.226082   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:36:35.226144   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:36:35.244303   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45451
	I0410 22:36:35.244736   52815 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:36:35.245226   52815 main.go:141] libmachine: Using API Version  1
	I0410 22:36:35.245249   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:36:35.245635   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:36:35.245802   52815 main.go:141] libmachine: (pause-262675) Calling .DriverName
	I0410 22:36:35.245943   52815 main.go:141] libmachine: (pause-262675) Calling .GetState
	I0410 22:36:35.248042   52815 fix.go:112] recreateIfNeeded on pause-262675: state=Running err=<nil>
	W0410 22:36:35.248071   52815 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:36:35.250297   52815 out.go:177] * Updating the running kvm2 "pause-262675" VM ...
	I0410 22:36:35.251636   52815 machine.go:94] provisionDockerMachine start ...
	I0410 22:36:35.251664   52815 main.go:141] libmachine: (pause-262675) Calling .DriverName
	I0410 22:36:35.251927   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHHostname
	I0410 22:36:35.254762   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.255293   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:35.255327   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.255514   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHPort
	I0410 22:36:35.255700   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:35.255921   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:35.256096   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHUsername
	I0410 22:36:35.256245   52815 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:35.256530   52815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0410 22:36:35.256547   52815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:36:35.371270   52815 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-262675
	
	I0410 22:36:35.371297   52815 main.go:141] libmachine: (pause-262675) Calling .GetMachineName
	I0410 22:36:35.371568   52815 buildroot.go:166] provisioning hostname "pause-262675"
	I0410 22:36:35.371603   52815 main.go:141] libmachine: (pause-262675) Calling .GetMachineName
	I0410 22:36:35.371788   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHHostname
	I0410 22:36:35.374768   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.375268   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:35.375304   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.375554   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHPort
	I0410 22:36:35.375740   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:35.375904   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:35.376202   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHUsername
	I0410 22:36:35.376382   52815 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:35.376591   52815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0410 22:36:35.376606   52815 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-262675 && echo "pause-262675" | sudo tee /etc/hostname
	I0410 22:36:35.498238   52815 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-262675
	
	I0410 22:36:35.498265   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHHostname
	I0410 22:36:35.501085   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.501514   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:35.501542   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.501757   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHPort
	I0410 22:36:35.501928   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:35.502135   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:35.502330   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHUsername
	I0410 22:36:35.502516   52815 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:35.502738   52815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0410 22:36:35.502767   52815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-262675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-262675/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-262675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:36:35.609515   52815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:36:35.609545   52815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:36:35.609597   52815 buildroot.go:174] setting up certificates
	I0410 22:36:35.609615   52815 provision.go:84] configureAuth start
	I0410 22:36:35.609634   52815 main.go:141] libmachine: (pause-262675) Calling .GetMachineName
	I0410 22:36:35.609910   52815 main.go:141] libmachine: (pause-262675) Calling .GetIP
	I0410 22:36:35.612888   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.613384   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:35.613450   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.613608   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHHostname
	I0410 22:36:35.616172   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.616608   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:35.616643   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.616809   52815 provision.go:143] copyHostCerts
	I0410 22:36:35.616885   52815 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:36:35.616906   52815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:36:35.616982   52815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:36:35.617103   52815 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:36:35.617114   52815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:36:35.617147   52815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:36:35.617240   52815 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:36:35.617252   52815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:36:35.617283   52815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:36:35.617498   52815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.pause-262675 san=[127.0.0.1 192.168.50.144 localhost minikube pause-262675]
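	The provision step above regenerates the machine's server certificate with SANs covering the VM IP and host names (san=[127.0.0.1 192.168.50.144 localhost minikube pause-262675]). A minimal, self-contained Go sketch of issuing a certificate with that same SAN set follows; it self-signs for brevity, whereas minikube signs against its own CA, so the subject and parameters are illustrative only.

	// Sketch: issue a self-signed TLS server certificate carrying the SANs requested above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-262675"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the log line: san=[127.0.0.1 192.168.50.144 localhost minikube pause-262675]
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.144")},
			DNSNames:    []string{"localhost", "minikube", "pause-262675"},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}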
	I0410 22:36:35.833928   52815 provision.go:177] copyRemoteCerts
	I0410 22:36:35.834009   52815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:36:35.834072   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHHostname
	I0410 22:36:35.837387   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.837733   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:35.837765   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:35.837932   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHPort
	I0410 22:36:35.838239   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:35.838447   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHUsername
	I0410 22:36:35.838651   52815 sshutil.go:53] new ssh client: &{IP:192.168.50.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/pause-262675/id_rsa Username:docker}
	I0410 22:36:35.923518   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:36:35.951981   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0410 22:36:35.980007   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:36:36.008288   52815 provision.go:87] duration metric: took 398.656418ms to configureAuth
	I0410 22:36:36.008321   52815 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:36:36.008582   52815 config.go:182] Loaded profile config "pause-262675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:36:36.008661   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHHostname
	I0410 22:36:36.011834   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:36.012258   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:36.012284   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:36.012559   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHPort
	I0410 22:36:36.012831   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:36.013001   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:36.013131   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHUsername
	I0410 22:36:36.013329   52815 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:36.013504   52815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0410 22:36:36.013519   52815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:36:42.191377   52815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:36:42.191399   52815 machine.go:97] duration metric: took 6.93974411s to provisionDockerMachine
	I0410 22:36:42.191409   52815 start.go:293] postStartSetup for "pause-262675" (driver="kvm2")
	I0410 22:36:42.191427   52815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:36:42.191444   52815 main.go:141] libmachine: (pause-262675) Calling .DriverName
	I0410 22:36:42.191804   52815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:36:42.191847   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHHostname
	I0410 22:36:42.195069   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:42.195532   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:42.195567   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:42.195772   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHPort
	I0410 22:36:42.196008   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:42.196195   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHUsername
	I0410 22:36:42.196346   52815 sshutil.go:53] new ssh client: &{IP:192.168.50.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/pause-262675/id_rsa Username:docker}
	I0410 22:36:42.283708   52815 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:36:42.290238   52815 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:36:42.290275   52815 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:36:42.290348   52815 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:36:42.290437   52815 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:36:42.290576   52815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:36:42.301959   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:36:42.333833   52815 start.go:296] duration metric: took 142.411826ms for postStartSetup
	I0410 22:36:42.333879   52815 fix.go:56] duration metric: took 7.108328689s for fixHost
	I0410 22:36:42.333909   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHHostname
	I0410 22:36:42.337286   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:42.337655   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:42.337688   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:42.337976   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHPort
	I0410 22:36:42.338192   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:42.338392   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:42.338635   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHUsername
	I0410 22:36:42.338837   52815 main.go:141] libmachine: Using SSH client type: native
	I0410 22:36:42.339024   52815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0410 22:36:42.339037   52815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0410 22:36:42.445592   52815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712788602.435753182
	
	I0410 22:36:42.445620   52815 fix.go:216] guest clock: 1712788602.435753182
	I0410 22:36:42.445630   52815 fix.go:229] Guest: 2024-04-10 22:36:42.435753182 +0000 UTC Remote: 2024-04-10 22:36:42.333884348 +0000 UTC m=+17.192110010 (delta=101.868834ms)
	I0410 22:36:42.445688   52815 fix.go:200] guest clock delta is within tolerance: 101.868834ms
	I0410 22:36:42.445698   52815 start.go:83] releasing machines lock for "pause-262675", held for 7.220184563s
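	The fix.go lines above read the guest clock with `date +%s.%N` and confirm the guest/host delta (101.868834ms here) sits within tolerance before releasing the machines lock. A small Go sketch of that comparison, using the exact values from the log and an assumed 2-second threshold (not minikube's configured value):

	// Sketch: parse a guest `date +%s.%N` reading and check the clock delta against a tolerance.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1712788602.435753182") // guest reading from the log
		if err != nil {
			panic(err)
		}
		remote := time.Date(2024, 4, 10, 22, 36, 42, 333884348, time.UTC) // host-side timestamp from the log
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < 2*time.Second)
	}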
	I0410 22:36:42.445731   52815 main.go:141] libmachine: (pause-262675) Calling .DriverName
	I0410 22:36:42.446040   52815 main.go:141] libmachine: (pause-262675) Calling .GetIP
	I0410 22:36:42.449427   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:42.449942   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:42.449989   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:42.450149   52815 main.go:141] libmachine: (pause-262675) Calling .DriverName
	I0410 22:36:42.450727   52815 main.go:141] libmachine: (pause-262675) Calling .DriverName
	I0410 22:36:42.450941   52815 main.go:141] libmachine: (pause-262675) Calling .DriverName
	I0410 22:36:42.451072   52815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:36:42.451135   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHHostname
	I0410 22:36:42.451159   52815 ssh_runner.go:195] Run: cat /version.json
	I0410 22:36:42.451184   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHHostname
	I0410 22:36:42.453884   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:42.454000   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:42.454312   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:42.454336   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:42.454367   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:42.454384   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:42.454544   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHPort
	I0410 22:36:42.454654   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHPort
	I0410 22:36:42.454732   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:42.454842   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHKeyPath
	I0410 22:36:42.454916   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHUsername
	I0410 22:36:42.454995   52815 main.go:141] libmachine: (pause-262675) Calling .GetSSHUsername
	I0410 22:36:42.455178   52815 sshutil.go:53] new ssh client: &{IP:192.168.50.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/pause-262675/id_rsa Username:docker}
	I0410 22:36:42.455182   52815 sshutil.go:53] new ssh client: &{IP:192.168.50.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/pause-262675/id_rsa Username:docker}
	I0410 22:36:42.568466   52815 ssh_runner.go:195] Run: systemctl --version
	I0410 22:36:42.575390   52815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:36:42.737974   52815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:36:42.746975   52815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:36:42.747056   52815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:36:42.760254   52815 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0410 22:36:42.760291   52815 start.go:494] detecting cgroup driver to use...
	I0410 22:36:42.760366   52815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:36:42.783692   52815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:36:42.801047   52815 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:36:42.801116   52815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:36:42.820361   52815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:36:42.839628   52815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:36:43.016527   52815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:36:43.172254   52815 docker.go:233] disabling docker service ...
	I0410 22:36:43.172357   52815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:36:43.193652   52815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:36:43.210680   52815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:36:43.364640   52815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:36:43.513600   52815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:36:43.528305   52815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:36:43.553894   52815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:36:43.553978   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:36:43.566947   52815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:36:43.567041   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:36:43.579556   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:36:43.591123   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:36:43.602192   52815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:36:43.644893   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:36:43.706855   52815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:36:43.772272   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:36:43.892629   52815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:36:44.002106   52815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:36:44.072241   52815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:36:44.449577   52815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:36:45.030848   52815 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:36:45.030934   52815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:36:45.037009   52815 start.go:562] Will wait 60s for crictl version
	I0410 22:36:45.037072   52815 ssh_runner.go:195] Run: which crictl
	I0410 22:36:45.041297   52815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:36:45.081389   52815 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
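	Before probing crictl, start.go above waits up to 60s for /var/run/crio/crio.sock to reappear after the crio restart. A rough local Go equivalent of that wait loop; the 500ms polling interval is an assumption, and minikube performs the equivalent stat over SSH rather than on the local filesystem.

	// Sketch: wait for a unix socket path to appear, with a deadline.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket ready")
	}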
	I0410 22:36:45.081502   52815 ssh_runner.go:195] Run: crio --version
	I0410 22:36:45.120436   52815 ssh_runner.go:195] Run: crio --version
	I0410 22:36:45.159210   52815 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:36:45.160525   52815 main.go:141] libmachine: (pause-262675) Calling .GetIP
	I0410 22:36:45.163235   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:45.163588   52815 main.go:141] libmachine: (pause-262675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:c3:26", ip: ""} in network mk-pause-262675: {Iface:virbr2 ExpiryTime:2024-04-10 23:35:01 +0000 UTC Type:0 Mac:52:54:00:ed:c3:26 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:pause-262675 Clientid:01:52:54:00:ed:c3:26}
	I0410 22:36:45.163616   52815 main.go:141] libmachine: (pause-262675) DBG | domain pause-262675 has defined IP address 192.168.50.144 and MAC address 52:54:00:ed:c3:26 in network mk-pause-262675
	I0410 22:36:45.163786   52815 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0410 22:36:45.169144   52815 kubeadm.go:877] updating cluster {Name:pause-262675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:pause-262675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.144 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:36:45.169264   52815 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:36:45.169307   52815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:36:45.222101   52815 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:36:45.222123   52815 crio.go:433] Images already preloaded, skipping extraction
	I0410 22:36:45.222177   52815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:36:45.266802   52815 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:36:45.266823   52815 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:36:45.266830   52815 kubeadm.go:928] updating node { 192.168.50.144 8443 v1.29.3 crio true true} ...
	I0410 22:36:45.266941   52815 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-262675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:pause-262675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:36:45.267020   52815 ssh_runner.go:195] Run: crio config
	I0410 22:36:45.322394   52815 cni.go:84] Creating CNI manager for ""
	I0410 22:36:45.322432   52815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:36:45.322447   52815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:36:45.322482   52815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.144 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-262675 NodeName:pause-262675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:36:45.322693   52815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-262675"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
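	The kubeadm, kubelet and kube-proxy documents above are written out to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a hedged illustration only (this is not minikube's own validation path), the sketch below unmarshals the KubeletConfiguration fields the CRI-O setup depends on using gopkg.in/yaml.v3:

	// Sketch: read back the cgroup driver and runtime endpoint from a generated KubeletConfiguration.
	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3"
	)

	type kubeletConfig struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
		StaticPodPath            string `yaml:"staticPodPath"`
	}

	func main() {
		doc := `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	staticPodPath: /etc/kubernetes/manifests
	`
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: driver=%s endpoint=%s pods=%s\n",
			cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.StaticPodPath)
	}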
	
	I0410 22:36:45.322770   52815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:36:45.334376   52815 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:36:45.334470   52815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:36:45.345768   52815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0410 22:36:45.367099   52815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:36:45.389558   52815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0410 22:36:45.412354   52815 ssh_runner.go:195] Run: grep 192.168.50.144	control-plane.minikube.internal$ /etc/hosts
	I0410 22:36:45.416774   52815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:36:45.559192   52815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:36:45.575781   52815 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/pause-262675 for IP: 192.168.50.144
	I0410 22:36:45.575805   52815 certs.go:194] generating shared ca certs ...
	I0410 22:36:45.575820   52815 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:36:45.575984   52815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:36:45.576056   52815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:36:45.576071   52815 certs.go:256] generating profile certs ...
	I0410 22:36:45.576168   52815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/pause-262675/client.key
	I0410 22:36:45.576246   52815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/pause-262675/apiserver.key.dfc5a52f
	I0410 22:36:45.576305   52815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/pause-262675/proxy-client.key
	I0410 22:36:45.576480   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:36:45.576521   52815 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:36:45.576531   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:36:45.576571   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:36:45.576609   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:36:45.576640   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:36:45.576699   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:36:45.577518   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:36:45.606639   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:36:45.637110   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:36:45.668432   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:36:45.705344   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/pause-262675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0410 22:36:45.817780   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/pause-262675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:36:45.854586   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/pause-262675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:36:46.105314   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/pause-262675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:36:46.186480   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:36:46.235339   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:36:46.300990   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:36:46.336307   52815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:36:46.357648   52815 ssh_runner.go:195] Run: openssl version
	I0410 22:36:46.367654   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:36:46.380428   52815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:36:46.385645   52815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:36:46.385708   52815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:36:46.391973   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:36:46.403098   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:36:46.416123   52815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:36:46.420948   52815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:36:46.420996   52815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:36:46.426825   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:36:46.436849   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:36:46.447804   52815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:36:46.452570   52815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:36:46.452635   52815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:36:46.458598   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:36:46.468822   52815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:36:46.473639   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:36:46.479535   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:36:46.486096   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:36:46.494724   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:36:46.502255   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:36:46.512650   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
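	The openssl probes above check that each control-plane certificate remains valid for at least 86400 seconds (24 hours). The same check expressed directly in Go with crypto/x509, for one of the paths from the log:

	// Sketch: Go equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}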
	I0410 22:36:46.527587   52815 kubeadm.go:391] StartCluster: {Name:pause-262675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:pause-262675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.144 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:36:46.527737   52815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:36:46.527798   52815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:36:46.587749   52815 cri.go:89] found id: "d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee"
	I0410 22:36:46.587773   52815 cri.go:89] found id: "55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1"
	I0410 22:36:46.587780   52815 cri.go:89] found id: "0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641"
	I0410 22:36:46.587784   52815 cri.go:89] found id: "7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c"
	I0410 22:36:46.587788   52815 cri.go:89] found id: "4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea"
	I0410 22:36:46.587792   52815 cri.go:89] found id: "dae010ac08bc1411c206e818b06d0d211b7506c5fef244d38698f4920531d794"
	I0410 22:36:46.587796   52815 cri.go:89] found id: "b824a7059497742d446ad42d8923a2a1891ed7433ba4b74ea538276a8db2e08a"
	I0410 22:36:46.587800   52815 cri.go:89] found id: ""
	I0410 22:36:46.587852   52815 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-262675 -n pause-262675
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-262675 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-262675 logs -n 25: (1.453009934s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-688825 sudo find         | cilium-688825             | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:32 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |                |                     |                     |
	| ssh     | -p cilium-688825 sudo crio         | cilium-688825             | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:32 UTC |                     |
	|         | config                             |                           |         |                |                     |                     |
	| delete  | -p cilium-688825                   | cilium-688825             | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:32 UTC | 10 Apr 24 22:32 UTC |
	| start   | -p cert-expiration-464519          | cert-expiration-464519    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:32 UTC | 10 Apr 24 22:34 UTC |
	|         | --memory=2048                      |                           |         |                |                     |                     |
	|         | --cert-expiration=3m               |                           |         |                |                     |                     |
	|         | --driver=kvm2                      |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| start   | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:33 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| delete  | -p offline-crio-874231             | offline-crio-874231       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:33 UTC |
	| start   | -p force-systemd-flag-738205       | force-systemd-flag-738205 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:34 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |                |                     |                     |
	|         | --alsologtostderr                  |                           |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| start   | -p running-upgrade-869202          | running-upgrade-869202    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:35 UTC |
	|         | --memory=2200                      |                           |         |                |                     |                     |
	|         | --alsologtostderr                  |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:33 UTC |
	| start   | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:34 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-738205 ssh cat  | force-systemd-flag-738205 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:34 UTC | 10 Apr 24 22:34 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-738205       | force-systemd-flag-738205 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:34 UTC | 10 Apr 24 22:34 UTC |
	| start   | -p pause-262675 --memory=2048      | pause-262675              | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:34 UTC | 10 Apr 24 22:36 UTC |
	|         | --install-addons=false             |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-857710 sudo        | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:34 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |                |                     |                     |
	|         | service kubelet                    |                           |         |                |                     |                     |
	| stop    | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	| start   | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	|         | --driver=kvm2                      |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-869202          | running-upgrade-869202    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	| start   | -p kubernetes-upgrade-407031       | kubernetes-upgrade-407031 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC |                     |
	|         | --memory=2200                      |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |                |                     |                     |
	|         | --alsologtostderr                  |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-857710 sudo        | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |                |                     |                     |
	|         | service kubelet                    |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	| start   | -p stopped-upgrade-546741          | minikube                  | jenkins | v1.26.0        | 10 Apr 24 22:35 UTC | 10 Apr 24 22:37 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |                |                     |                     |
	|         |  --container-runtime=crio          |                           |         |                |                     |                     |
	| start   | -p pause-262675                    | pause-262675              | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:36 UTC | 10 Apr 24 22:37 UTC |
	|         | --alsologtostderr                  |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| start   | -p cert-expiration-464519          | cert-expiration-464519    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:37 UTC |                     |
	|         | --memory=2048                      |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h            |                           |         |                |                     |                     |
	|         | --driver=kvm2                      |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-546741 stop        | minikube                  | jenkins | v1.26.0        | 10 Apr 24 22:37 UTC | 10 Apr 24 22:37 UTC |
	| start   | -p stopped-upgrade-546741          | stopped-upgrade-546741    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:37 UTC |                     |
	|         | --memory=2200                      |                           |         |                |                     |                     |
	|         | --alsologtostderr                  |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	|---------|------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:37:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:37:05.424187   53176 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:37:05.424479   53176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:37:05.424490   53176 out.go:304] Setting ErrFile to fd 2...
	I0410 22:37:05.424494   53176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:37:05.425110   53176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:37:05.426266   53176 out.go:298] Setting JSON to false
	I0410 22:37:05.427325   53176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4768,"bootTime":1712783858,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:37:05.427402   53176 start.go:139] virtualization: kvm guest
	I0410 22:37:05.429326   53176 out.go:177] * [stopped-upgrade-546741] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:37:05.430940   53176 notify.go:220] Checking for updates...
	I0410 22:37:05.430962   53176 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:37:05.432391   53176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:37:05.433815   53176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:37:05.435217   53176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:37:05.436656   53176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:37:05.438334   53176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:37:05.440087   53176 config.go:182] Loaded profile config "stopped-upgrade-546741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0410 22:37:05.440601   53176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:37:05.440652   53176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:37:05.456288   53176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I0410 22:37:05.456777   53176 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:37:05.457392   53176 main.go:141] libmachine: Using API Version  1
	I0410 22:37:05.457414   53176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:37:05.457784   53176 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:37:05.458025   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .DriverName
	I0410 22:37:05.460094   53176 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0410 22:37:05.461611   53176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:37:05.461971   53176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:37:05.462020   53176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:37:05.477114   53176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36895
	I0410 22:37:05.477582   53176 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:37:05.478048   53176 main.go:141] libmachine: Using API Version  1
	I0410 22:37:05.478064   53176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:37:05.478364   53176 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:37:05.478604   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .DriverName
	I0410 22:37:05.514360   53176 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:37:05.515629   53176 start.go:297] selected driver: kvm2
	I0410 22:37:05.515643   53176 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-546741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-546741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0410 22:37:05.515776   53176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:37:05.516787   53176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:37:05.516879   53176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:37:05.531621   53176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:37:05.531968   53176 cni.go:84] Creating CNI manager for ""
	I0410 22:37:05.531983   53176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:37:05.532025   53176 start.go:340] cluster config:
	{Name:stopped-upgrade-546741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-546741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0410 22:37:05.532129   53176 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:37:05.533944   53176 out.go:177] * Starting "stopped-upgrade-546741" primary control-plane node in "stopped-upgrade-546741" cluster
	I0410 22:37:03.429103   53086 machine.go:94] provisionDockerMachine start ...
	I0410 22:37:03.429116   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:03.429355   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:03.432054   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.432689   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:03.432705   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.432883   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:03.433074   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.433250   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.433359   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:03.433567   53086 main.go:141] libmachine: Using SSH client type: native
	I0410 22:37:03.433747   53086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0410 22:37:03.433752   53086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:37:03.558073   53086 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-464519
	
	I0410 22:37:03.558090   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetMachineName
	I0410 22:37:03.558366   53086 buildroot.go:166] provisioning hostname "cert-expiration-464519"
	I0410 22:37:03.558409   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetMachineName
	I0410 22:37:03.558610   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:03.561669   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.562130   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:03.562164   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.562348   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:03.562528   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.562712   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.562856   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:03.562989   53086 main.go:141] libmachine: Using SSH client type: native
	I0410 22:37:03.563133   53086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0410 22:37:03.563139   53086 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-464519 && echo "cert-expiration-464519" | sudo tee /etc/hostname
	I0410 22:37:03.711018   53086 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-464519
	
	I0410 22:37:03.711051   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:03.714315   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.714673   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:03.714701   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.714872   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:03.715079   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.715285   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.715455   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:03.715628   53086 main.go:141] libmachine: Using SSH client type: native
	I0410 22:37:03.715867   53086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0410 22:37:03.715885   53086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-464519' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-464519/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-464519' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:37:03.843537   53086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:37:03.843578   53086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:37:03.843600   53086 buildroot.go:174] setting up certificates
	I0410 22:37:03.843610   53086 provision.go:84] configureAuth start
	I0410 22:37:03.843622   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetMachineName
	I0410 22:37:03.843948   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetIP
	I0410 22:37:03.847123   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.847525   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:03.847546   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.847742   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:03.850271   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.850658   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:03.850677   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.850860   53086 provision.go:143] copyHostCerts
	I0410 22:37:03.850923   53086 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:37:03.850941   53086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:37:03.851029   53086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:37:03.851168   53086 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:37:03.851174   53086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:37:03.851220   53086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:37:03.851324   53086 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:37:03.851330   53086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:37:03.851365   53086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:37:03.851452   53086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-464519 san=[127.0.0.1 192.168.72.34 cert-expiration-464519 localhost minikube]
	I0410 22:37:04.028839   53086 provision.go:177] copyRemoteCerts
	I0410 22:37:04.028889   53086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:37:04.028913   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:04.032492   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:04.032921   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:04.032947   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:04.033141   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:04.033343   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:04.033529   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:04.033685   53086 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/cert-expiration-464519/id_rsa Username:docker}
	I0410 22:37:04.131227   53086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:37:04.166019   53086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0410 22:37:04.195031   53086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:37:04.227661   53086 provision.go:87] duration metric: took 384.041729ms to configureAuth
	I0410 22:37:04.227679   53086 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:37:04.227869   53086 config.go:182] Loaded profile config "cert-expiration-464519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:37:04.227966   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:04.231043   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:04.231456   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:04.231479   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:04.231691   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:04.231896   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:04.232065   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:04.232253   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:04.232444   53086 main.go:141] libmachine: Using SSH client type: native
	I0410 22:37:04.232651   53086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0410 22:37:04.232664   53086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:37:06.090144   52815 pod_ready.go:102] pod "etcd-pause-262675" in "kube-system" namespace has status "Ready":"False"
	I0410 22:37:08.090861   52815 pod_ready.go:102] pod "etcd-pause-262675" in "kube-system" namespace has status "Ready":"False"
	I0410 22:37:10.090373   52815 pod_ready.go:92] pod "etcd-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.090394   52815 pod_ready.go:81] duration metric: took 13.006928664s for pod "etcd-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.090404   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.095592   52815 pod_ready.go:92] pod "kube-apiserver-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.095611   52815 pod_ready.go:81] duration metric: took 5.201297ms for pod "kube-apiserver-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.095623   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.100442   52815 pod_ready.go:92] pod "kube-controller-manager-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.100460   52815 pod_ready.go:81] duration metric: took 4.831274ms for pod "kube-controller-manager-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.100469   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5rmsk" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.105282   52815 pod_ready.go:92] pod "kube-proxy-5rmsk" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.105298   52815 pod_ready.go:81] duration metric: took 4.823295ms for pod "kube-proxy-5rmsk" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.105306   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.109779   52815 pod_ready.go:92] pod "kube-scheduler-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.109795   52815 pod_ready.go:81] duration metric: took 4.484609ms for pod "kube-scheduler-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.109802   52815 pod_ready.go:38] duration metric: took 13.540270367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:37:10.109817   52815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:37:10.122311   52815 ops.go:34] apiserver oom_adj: -16
	I0410 22:37:10.122334   52815 kubeadm.go:591] duration metric: took 23.469633278s to restartPrimaryControlPlane
	I0410 22:37:10.122343   52815 kubeadm.go:393] duration metric: took 23.594788555s to StartCluster
	I0410 22:37:10.122362   52815 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:37:10.122441   52815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:37:10.123372   52815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:37:10.123581   52815 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.144 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:37:10.125277   52815 out.go:177] * Verifying Kubernetes components...
	I0410 22:37:10.123668   52815 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:37:10.123866   52815 config.go:182] Loaded profile config "pause-262675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:37:10.126528   52815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:37:10.128054   52815 out.go:177] * Enabled addons: 
	I0410 22:37:05.535377   53176 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0410 22:37:05.535426   53176 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0410 22:37:05.535438   53176 cache.go:56] Caching tarball of preloaded images
	I0410 22:37:05.535512   53176 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:37:05.535523   53176 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0410 22:37:05.535612   53176 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/stopped-upgrade-546741/config.json ...
	I0410 22:37:05.535794   53176 start.go:360] acquireMachinesLock for stopped-upgrade-546741: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:37:10.138262   53176 start.go:364] duration metric: took 4.602424933s to acquireMachinesLock for "stopped-upgrade-546741"
	I0410 22:37:10.138330   53176 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:37:10.138389   53176 fix.go:54] fixHost starting: 
	I0410 22:37:10.138829   53176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:37:10.138873   53176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:37:10.155935   53176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0410 22:37:10.156392   53176 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:37:10.156921   53176 main.go:141] libmachine: Using API Version  1
	I0410 22:37:10.156955   53176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:37:10.157300   53176 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:37:10.157488   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .DriverName
	I0410 22:37:10.157657   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .GetState
	I0410 22:37:10.159019   53176 fix.go:112] recreateIfNeeded on stopped-upgrade-546741: state=Stopped err=<nil>
	I0410 22:37:10.159053   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .DriverName
	W0410 22:37:10.159216   53176 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:37:10.161030   53176 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-546741" ...
	I0410 22:37:10.129410   52815 addons.go:505] duration metric: took 5.753328ms for enable addons: enabled=[]
	I0410 22:37:10.162324   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .Start
	I0410 22:37:10.162491   53176 main.go:141] libmachine: (stopped-upgrade-546741) Ensuring networks are active...
	I0410 22:37:10.163325   53176 main.go:141] libmachine: (stopped-upgrade-546741) Ensuring network default is active
	I0410 22:37:10.163729   53176 main.go:141] libmachine: (stopped-upgrade-546741) Ensuring network mk-stopped-upgrade-546741 is active
	I0410 22:37:10.164211   53176 main.go:141] libmachine: (stopped-upgrade-546741) Getting domain xml...
	I0410 22:37:10.164972   53176 main.go:141] libmachine: (stopped-upgrade-546741) Creating domain...
	I0410 22:37:09.874591   53086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:37:09.874607   53086 machine.go:97] duration metric: took 6.445496488s to provisionDockerMachine
	I0410 22:37:09.874618   53086 start.go:293] postStartSetup for "cert-expiration-464519" (driver="kvm2")
	I0410 22:37:09.874631   53086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:37:09.874661   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:09.875088   53086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:37:09.875115   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:09.878002   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:09.878430   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:09.878450   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:09.878683   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:09.878901   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:09.879077   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:09.879277   53086 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/cert-expiration-464519/id_rsa Username:docker}
	I0410 22:37:09.971846   53086 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:37:09.976830   53086 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:37:09.976844   53086 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:37:09.976902   53086 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:37:09.976975   53086 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:37:09.977052   53086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:37:09.988123   53086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:37:10.015700   53086 start.go:296] duration metric: took 141.070008ms for postStartSetup
	I0410 22:37:10.015727   53086 fix.go:56] duration metric: took 6.610185428s for fixHost
	I0410 22:37:10.015743   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:10.018443   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.018753   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:10.018771   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.018917   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:10.019117   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:10.019315   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:10.019499   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:10.019652   53086 main.go:141] libmachine: Using SSH client type: native
	I0410 22:37:10.019832   53086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0410 22:37:10.019837   53086 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:37:10.138119   53086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712788630.128092378
	
	I0410 22:37:10.138132   53086 fix.go:216] guest clock: 1712788630.128092378
	I0410 22:37:10.138138   53086 fix.go:229] Guest: 2024-04-10 22:37:10.128092378 +0000 UTC Remote: 2024-04-10 22:37:10.015728755 +0000 UTC m=+6.779050279 (delta=112.363623ms)
	I0410 22:37:10.138153   53086 fix.go:200] guest clock delta is within tolerance: 112.363623ms
	I0410 22:37:10.138157   53086 start.go:83] releasing machines lock for "cert-expiration-464519", held for 6.732623629s
	I0410 22:37:10.138188   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:10.138439   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetIP
	I0410 22:37:10.141588   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.141907   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:10.141933   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.142089   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:10.142801   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:10.143028   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:10.143123   53086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:37:10.143171   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:10.143264   53086 ssh_runner.go:195] Run: cat /version.json
	I0410 22:37:10.143282   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:10.145969   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.146339   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:10.146354   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.146373   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.146597   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:10.146766   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:10.146828   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:10.146850   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.146944   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:10.147064   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:10.147115   53086 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/cert-expiration-464519/id_rsa Username:docker}
	I0410 22:37:10.147192   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:10.147335   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:10.147471   53086 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/cert-expiration-464519/id_rsa Username:docker}
	I0410 22:37:10.265807   53086 ssh_runner.go:195] Run: systemctl --version
	I0410 22:37:10.272441   53086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:37:10.686228   53086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:37:10.769481   53086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:37:10.769529   53086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:37:10.803335   53086 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0410 22:37:10.803352   53086 start.go:494] detecting cgroup driver to use...
	I0410 22:37:10.803415   53086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:37:10.902128   53086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:37:10.938896   53086 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:37:10.938955   53086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:37:10.991708   53086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:37:11.030920   53086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:37:11.278880   53086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:37:11.557635   53086 docker.go:233] disabling docker service ...
	I0410 22:37:11.557693   53086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:37:11.591738   53086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:37:11.623721   53086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:37:11.839774   53086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:37:12.058321   53086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:37:12.076097   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:37:12.102618   53086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:37:12.102672   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.120038   53086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:37:12.120084   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.137410   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.152861   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.166721   53086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:37:12.179585   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.192124   53086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.204980   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.217760   53086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:37:12.230158   53086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:37:12.241590   53086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:37:12.409258   53086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:37:10.316461   52815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:37:10.340134   52815 node_ready.go:35] waiting up to 6m0s for node "pause-262675" to be "Ready" ...
	I0410 22:37:10.345173   52815 node_ready.go:49] node "pause-262675" has status "Ready":"True"
	I0410 22:37:10.345201   52815 node_ready.go:38] duration metric: took 5.035755ms for node "pause-262675" to be "Ready" ...
	I0410 22:37:10.345213   52815 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:37:10.492763   52815 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ngdgs" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.889352   52815 pod_ready.go:92] pod "coredns-76f75df574-ngdgs" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.889384   52815 pod_ready.go:81] duration metric: took 396.589751ms for pod "coredns-76f75df574-ngdgs" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.889396   52815 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:11.289819   52815 pod_ready.go:92] pod "etcd-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:11.289844   52815 pod_ready.go:81] duration metric: took 400.438701ms for pod "etcd-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:11.289856   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:11.689214   52815 pod_ready.go:92] pod "kube-apiserver-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:11.689246   52815 pod_ready.go:81] duration metric: took 399.381763ms for pod "kube-apiserver-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:11.689266   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.089017   52815 pod_ready.go:92] pod "kube-controller-manager-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:12.089041   52815 pod_ready.go:81] duration metric: took 399.764754ms for pod "kube-controller-manager-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.089054   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5rmsk" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.489063   52815 pod_ready.go:92] pod "kube-proxy-5rmsk" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:12.489088   52815 pod_ready.go:81] duration metric: took 400.026407ms for pod "kube-proxy-5rmsk" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.489097   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.889765   52815 pod_ready.go:92] pod "kube-scheduler-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:12.889789   52815 pod_ready.go:81] duration metric: took 400.684751ms for pod "kube-scheduler-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.889799   52815 pod_ready.go:38] duration metric: took 2.544574728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
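The readiness waits above target system pods by label; a minimal interactive equivalent with kubectl, assuming the pause-262675 context this run writes to kubeconfig, would be:

    $ kubectl --context pause-262675 -n kube-system get pods -l k8s-app=kube-dns
    $ kubectl --context pause-262675 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m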
	I0410 22:37:12.889815   52815 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:37:12.889871   52815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:37:12.906462   52815 api_server.go:72] duration metric: took 2.782852962s to wait for apiserver process to appear ...
	I0410 22:37:12.906499   52815 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:37:12.906523   52815 api_server.go:253] Checking apiserver healthz at https://192.168.50.144:8443/healthz ...
	I0410 22:37:12.912070   52815 api_server.go:279] https://192.168.50.144:8443/healthz returned 200:
	ok
	I0410 22:37:12.913283   52815 api_server.go:141] control plane version: v1.29.3
	I0410 22:37:12.913310   52815 api_server.go:131] duration metric: took 6.802595ms to wait for apiserver health ...
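The healthz probe above can be repeated by hand against the same endpoint; /healthz is normally readable without credentials, so a sketch is simply:

    $ curl -k https://192.168.50.144:8443/healthz
    ok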
	I0410 22:37:12.913322   52815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:37:13.089996   52815 system_pods.go:59] 6 kube-system pods found
	I0410 22:37:13.090021   52815 system_pods.go:61] "coredns-76f75df574-ngdgs" [34376a83-ecec-4874-8fa9-653b3ba7a8fb] Running
	I0410 22:37:13.090026   52815 system_pods.go:61] "etcd-pause-262675" [2539bb29-407d-49c6-be2b-4b462715f551] Running
	I0410 22:37:13.090030   52815 system_pods.go:61] "kube-apiserver-pause-262675" [b4e1eb76-b1c8-4f96-8158-8887bd29b7c5] Running
	I0410 22:37:13.090033   52815 system_pods.go:61] "kube-controller-manager-pause-262675" [2383c61b-ce86-41e0-afd7-2d17d5377563] Running
	I0410 22:37:13.090038   52815 system_pods.go:61] "kube-proxy-5rmsk" [0e7d0245-1820-426e-8a54-a1df3db2c2a4] Running
	I0410 22:37:13.090041   52815 system_pods.go:61] "kube-scheduler-pause-262675" [cfcc8a79-4a26-4a6b-95a5-0d026d28eec3] Running
	I0410 22:37:13.090047   52815 system_pods.go:74] duration metric: took 176.719207ms to wait for pod list to return data ...
	I0410 22:37:13.090053   52815 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:37:13.289539   52815 default_sa.go:45] found service account: "default"
	I0410 22:37:13.289570   52815 default_sa.go:55] duration metric: took 199.510016ms for default service account to be created ...
	I0410 22:37:13.289582   52815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:37:13.493836   52815 system_pods.go:86] 6 kube-system pods found
	I0410 22:37:13.493861   52815 system_pods.go:89] "coredns-76f75df574-ngdgs" [34376a83-ecec-4874-8fa9-653b3ba7a8fb] Running
	I0410 22:37:13.493866   52815 system_pods.go:89] "etcd-pause-262675" [2539bb29-407d-49c6-be2b-4b462715f551] Running
	I0410 22:37:13.493870   52815 system_pods.go:89] "kube-apiserver-pause-262675" [b4e1eb76-b1c8-4f96-8158-8887bd29b7c5] Running
	I0410 22:37:13.493874   52815 system_pods.go:89] "kube-controller-manager-pause-262675" [2383c61b-ce86-41e0-afd7-2d17d5377563] Running
	I0410 22:37:13.493878   52815 system_pods.go:89] "kube-proxy-5rmsk" [0e7d0245-1820-426e-8a54-a1df3db2c2a4] Running
	I0410 22:37:13.493882   52815 system_pods.go:89] "kube-scheduler-pause-262675" [cfcc8a79-4a26-4a6b-95a5-0d026d28eec3] Running
	I0410 22:37:13.493888   52815 system_pods.go:126] duration metric: took 204.299925ms to wait for k8s-apps to be running ...
	I0410 22:37:13.493896   52815 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:37:13.493937   52815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:37:13.521435   52815 system_svc.go:56] duration metric: took 27.509543ms WaitForService to wait for kubelet
	I0410 22:37:13.521479   52815 kubeadm.go:576] duration metric: took 3.397871482s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:37:13.521512   52815 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:37:13.688311   52815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:37:13.688344   52815 node_conditions.go:123] node cpu capacity is 2
	I0410 22:37:13.688361   52815 node_conditions.go:105] duration metric: took 166.839352ms to run NodePressure ...
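The NodePressure check reads the node capacity fields reported above (ephemeral storage 17734596Ki, 2 CPUs); the same values can be pulled with, for example:

    $ kubectl --context pause-262675 get node pause-262675 -o jsonpath='{.status.capacity}'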
	I0410 22:37:13.688374   52815 start.go:240] waiting for startup goroutines ...
	I0410 22:37:13.688385   52815 start.go:245] waiting for cluster config update ...
	I0410 22:37:13.688409   52815 start.go:254] writing updated cluster config ...
	I0410 22:37:13.688798   52815 ssh_runner.go:195] Run: rm -f paused
	I0410 22:37:13.738564   52815 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:37:13.740852   52815 out.go:177] * Done! kubectl is now configured to use "pause-262675" cluster and "default" namespace by default
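Since this profile is exercised by the Pause tests, the freshly started cluster can then be paused and resumed with the minikube binary under test, for example (not a step shown in this log):

    $ minikube pause -p pause-262675 --alsologtostderr -v=5
    $ minikube unpause -p pause-262675 --alsologtostderr -v=5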
	
	
	==> CRI-O <==
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.490389979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a257ef61-0a43-4a46-b2b7-22988048f02a name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.491750811Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86c22389-cefb-4cbe-a71a-fc32bcd3b566 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.492117558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712788634492092121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86c22389-cefb-4cbe-a71a-fc32bcd3b566 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.492810039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0687af3b-7e0b-4d1d-b8d2-e0b5f716b75a name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.492861454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0687af3b-7e0b-4d1d-b8d2-e0b5f716b75a name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.493114980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbebdd4cfc4eca7e3dc8fb95f76c48ebd519ec863c867415e510b74e750d3c38,PodSandboxId:837cf686abcff65dc7b800b587b093a2081b1fa50952fae4b0c85a62348bdb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712788616260958041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c404daec01843347ac717ed0e35a18a21e313ef844ff10fa1c6d555de5c1aa3d,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712788615811599269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b1ca2a6bb13e3d051425ba5ffd9d607aff8b3e63d63b4f36c2b70492432712,PodSandboxId:429cbb872f578512fe53c1ece73ed78a320d7c1afd42cf4ef85d4fda4a80289a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788611029673869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annot
ations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b4c8ba68eda49b3fc10e4b005785a90c5375c5d84d4f3c8c3290cffdc9b02f,PodSandboxId:8658846bdf398d3076e97d4f6b0a1407ea671749947a495e4d2870f126e9c8e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712788611014096175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb8
1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61456649ac3ae6a328789ca7aacd71dbf572c1e2af8b98dc1ebc2a7b6dc63fdd,PodSandboxId:556d6b70f03b84d0911d9231a7d6c440b06c57077911a1d7635bb466753a8e61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712788611027940741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10329215440e7807f796ae1783552d6b7498727ff904355fafaa66a8e1c74966,PodSandboxId:f80b1894e84960553130cd0e87d1d81676afccccc9ce5aa3d55a571b17bdd3bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712788610998915008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io
.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712788606208819446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02
b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1,PodSandboxId:101f9e02523f5940b556e93bd5de9c016dae495e23cb2143cb675275d1a51054,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712788604241700297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c,PodSandboxId:b19e372d535607bb8671b31b11dd0453a68ab8c49c31de40fc1e84a679f82352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712788604115576044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb81,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641,PodSandboxId:1e5e7041edf50f9ed552e15076c03d560d52bdcdc2d2c6275a6671e01ea44248,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712788604142152927,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea,PodSandboxId:271b003eed7c9398b993bdec7fb35adffe097b6c65a75a2ede7a454185d7b4c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712788604044386398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae010ac08bc1411c206e818b06d0d211b7506c5fef244d38698f4920531d794,PodSandboxId:e5edb36519bc4a1aef8e68b54a0ac0c763748d9d23385619b19f20200fe963bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712788544036674650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0687af3b-7e0b-4d1d-b8d2-e0b5f716b75a name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.538669543Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1669840-588e-40c7-835a-5bfef8a08eb7 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.538749727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1669840-588e-40c7-835a-5bfef8a08eb7 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.540618645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7217811b-cc8f-4581-b614-a6d49e5957ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.540980882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712788634540958770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7217811b-cc8f-4581-b614-a6d49e5957ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.541556090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b4ed2f0-28d8-44a1-b418-eb901d706f3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.541621363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b4ed2f0-28d8-44a1-b418-eb901d706f3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.541913250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbebdd4cfc4eca7e3dc8fb95f76c48ebd519ec863c867415e510b74e750d3c38,PodSandboxId:837cf686abcff65dc7b800b587b093a2081b1fa50952fae4b0c85a62348bdb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712788616260958041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c404daec01843347ac717ed0e35a18a21e313ef844ff10fa1c6d555de5c1aa3d,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712788615811599269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b1ca2a6bb13e3d051425ba5ffd9d607aff8b3e63d63b4f36c2b70492432712,PodSandboxId:429cbb872f578512fe53c1ece73ed78a320d7c1afd42cf4ef85d4fda4a80289a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788611029673869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annot
ations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b4c8ba68eda49b3fc10e4b005785a90c5375c5d84d4f3c8c3290cffdc9b02f,PodSandboxId:8658846bdf398d3076e97d4f6b0a1407ea671749947a495e4d2870f126e9c8e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712788611014096175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb8
1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61456649ac3ae6a328789ca7aacd71dbf572c1e2af8b98dc1ebc2a7b6dc63fdd,PodSandboxId:556d6b70f03b84d0911d9231a7d6c440b06c57077911a1d7635bb466753a8e61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712788611027940741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10329215440e7807f796ae1783552d6b7498727ff904355fafaa66a8e1c74966,PodSandboxId:f80b1894e84960553130cd0e87d1d81676afccccc9ce5aa3d55a571b17bdd3bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712788610998915008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io
.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712788606208819446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02
b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1,PodSandboxId:101f9e02523f5940b556e93bd5de9c016dae495e23cb2143cb675275d1a51054,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712788604241700297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c,PodSandboxId:b19e372d535607bb8671b31b11dd0453a68ab8c49c31de40fc1e84a679f82352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712788604115576044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb81,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641,PodSandboxId:1e5e7041edf50f9ed552e15076c03d560d52bdcdc2d2c6275a6671e01ea44248,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712788604142152927,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea,PodSandboxId:271b003eed7c9398b993bdec7fb35adffe097b6c65a75a2ede7a454185d7b4c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712788604044386398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae010ac08bc1411c206e818b06d0d211b7506c5fef244d38698f4920531d794,PodSandboxId:e5edb36519bc4a1aef8e68b54a0ac0c763748d9d23385619b19f20200fe963bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712788544036674650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b4ed2f0-28d8-44a1-b418-eb901d706f3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.574580371Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5fed473-8f12-43b8-8132-97cf386304a4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.574784162Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:837cf686abcff65dc7b800b587b093a2081b1fa50952fae4b0c85a62348bdb83,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-ngdgs,Uid:34376a83-ecec-4874-8fa9-653b3ba7a8fb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712788615823141491,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:36:55.496705217Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&PodSandboxMetadata{Name:kube-proxy-5rmsk,Uid:0e7d0245-1820-426e-8a54-a1df3db2c2a4,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1712788605860850409,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:35:42.900948899Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:556d6b70f03b84d0911d9231a7d6c440b06c57077911a1d7635bb466753a8e61,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-262675,Uid:300254c7602c01a47d7c7b015d2c108b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712788605843998119,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,tier: control-plane,},Annotations:map[string
]string{kubernetes.io/config.hash: 300254c7602c01a47d7c7b015d2c108b,kubernetes.io/config.seen: 2024-04-10T22:35:30.222074129Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:429cbb872f578512fe53c1ece73ed78a320d7c1afd42cf4ef85d4fda4a80289a,Metadata:&PodSandboxMetadata{Name:etcd-pause-262675,Uid:abbca0d000d237fc40d9ef2ad258eb84,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712788605792083571,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.144:2379,kubernetes.io/config.hash: abbca0d000d237fc40d9ef2ad258eb84,kubernetes.io/config.seen: 2024-04-10T22:35:30.222075102Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f80b1894e84960553130cd0e87d1d81676afccccc9ce5aa3d55a571b17bd
d3bd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-262675,Uid:1de360ec12402802bf8613b64a97ba7a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712788605784633558,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.144:8443,kubernetes.io/config.hash: 1de360ec12402802bf8613b64a97ba7a,kubernetes.io/config.seen: 2024-04-10T22:35:30.222068666Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8658846bdf398d3076e97d4f6b0a1407ea671749947a495e4d2870f126e9c8e1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-262675,Uid:d79f2270031ca8da755169edef48bb81,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712788605752083546,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb81,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d79f2270031ca8da755169edef48bb81,kubernetes.io/config.seen: 2024-04-10T22:35:30.222072900Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f5fed473-8f12-43b8-8132-97cf386304a4 name=/runtime.v1.RuntimeService/ListPodSandbox
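The ListContainers and ListPodSandbox responses in this journal can also be queried directly on the node with crictl, which this run points at the CRI-O socket through /etc/crictl.yaml; a sketch:

    $ sudo crictl ps -a    # all containers, including the CONTAINER_EXITED attempts listed above
    $ sudo crictl pods     # pod sandboxes, matching the SANDBOX_READY entries above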
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.575946107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08b8b316-4391-41f1-b7e1-ce28dc8402a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.576034230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08b8b316-4391-41f1-b7e1-ce28dc8402a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.576197826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbebdd4cfc4eca7e3dc8fb95f76c48ebd519ec863c867415e510b74e750d3c38,PodSandboxId:837cf686abcff65dc7b800b587b093a2081b1fa50952fae4b0c85a62348bdb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712788616260958041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c404daec01843347ac717ed0e35a18a21e313ef844ff10fa1c6d555de5c1aa3d,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712788615811599269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b1ca2a6bb13e3d051425ba5ffd9d607aff8b3e63d63b4f36c2b70492432712,PodSandboxId:429cbb872f578512fe53c1ece73ed78a320d7c1afd42cf4ef85d4fda4a80289a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788611029673869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annot
ations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b4c8ba68eda49b3fc10e4b005785a90c5375c5d84d4f3c8c3290cffdc9b02f,PodSandboxId:8658846bdf398d3076e97d4f6b0a1407ea671749947a495e4d2870f126e9c8e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712788611014096175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb8
1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61456649ac3ae6a328789ca7aacd71dbf572c1e2af8b98dc1ebc2a7b6dc63fdd,PodSandboxId:556d6b70f03b84d0911d9231a7d6c440b06c57077911a1d7635bb466753a8e61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712788611027940741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10329215440e7807f796ae1783552d6b7498727ff904355fafaa66a8e1c74966,PodSandboxId:f80b1894e84960553130cd0e87d1d81676afccccc9ce5aa3d55a571b17bdd3bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712788610998915008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io
.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08b8b316-4391-41f1-b7e1-ce28dc8402a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.587461665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ccc3e38-6fd3-4b36-9b79-3f4556868836 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.587800628Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ccc3e38-6fd3-4b36-9b79-3f4556868836 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.589355900Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2a8ca3a-7410-4223-88c1-d2fde7812838 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.589760850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712788634589739246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2a8ca3a-7410-4223-88c1-d2fde7812838 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.590669792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d78601a-9ebd-4341-84e6-5df5e061144d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.590735995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d78601a-9ebd-4341-84e6-5df5e061144d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:14 pause-262675 crio[2878]: time="2024-04-10 22:37:14.591683328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbebdd4cfc4eca7e3dc8fb95f76c48ebd519ec863c867415e510b74e750d3c38,PodSandboxId:837cf686abcff65dc7b800b587b093a2081b1fa50952fae4b0c85a62348bdb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712788616260958041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c404daec01843347ac717ed0e35a18a21e313ef844ff10fa1c6d555de5c1aa3d,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712788615811599269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b1ca2a6bb13e3d051425ba5ffd9d607aff8b3e63d63b4f36c2b70492432712,PodSandboxId:429cbb872f578512fe53c1ece73ed78a320d7c1afd42cf4ef85d4fda4a80289a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788611029673869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annot
ations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b4c8ba68eda49b3fc10e4b005785a90c5375c5d84d4f3c8c3290cffdc9b02f,PodSandboxId:8658846bdf398d3076e97d4f6b0a1407ea671749947a495e4d2870f126e9c8e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712788611014096175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb8
1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61456649ac3ae6a328789ca7aacd71dbf572c1e2af8b98dc1ebc2a7b6dc63fdd,PodSandboxId:556d6b70f03b84d0911d9231a7d6c440b06c57077911a1d7635bb466753a8e61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712788611027940741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10329215440e7807f796ae1783552d6b7498727ff904355fafaa66a8e1c74966,PodSandboxId:f80b1894e84960553130cd0e87d1d81676afccccc9ce5aa3d55a571b17bdd3bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712788610998915008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io
.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712788606208819446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02
b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1,PodSandboxId:101f9e02523f5940b556e93bd5de9c016dae495e23cb2143cb675275d1a51054,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712788604241700297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c,PodSandboxId:b19e372d535607bb8671b31b11dd0453a68ab8c49c31de40fc1e84a679f82352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712788604115576044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb81,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641,PodSandboxId:1e5e7041edf50f9ed552e15076c03d560d52bdcdc2d2c6275a6671e01ea44248,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712788604142152927,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea,PodSandboxId:271b003eed7c9398b993bdec7fb35adffe097b6c65a75a2ede7a454185d7b4c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712788604044386398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae010ac08bc1411c206e818b06d0d211b7506c5fef244d38698f4920531d794,PodSandboxId:e5edb36519bc4a1aef8e68b54a0ac0c763748d9d23385619b19f20200fe963bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712788544036674650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d78601a-9ebd-4341-84e6-5df5e061144d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fbebdd4cfc4ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago       Running             coredns                   1                   837cf686abcff       coredns-76f75df574-ngdgs
	c404daec01843       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   18 seconds ago       Running             kube-proxy                2                   555e291ef738b       kube-proxy-5rmsk
	b1b1ca2a6bb13       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago       Running             etcd                      2                   429cbb872f578       etcd-pause-262675
	61456649ac3ae       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   23 seconds ago       Running             kube-scheduler            2                   556d6b70f03b8       kube-scheduler-pause-262675
	45b4c8ba68eda       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   23 seconds ago       Running             kube-controller-manager   2                   8658846bdf398       kube-controller-manager-pause-262675
	10329215440e7       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   23 seconds ago       Running             kube-apiserver            2                   f80b1894e8496       kube-apiserver-pause-262675
	d859f4900e3c6       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   28 seconds ago       Exited              kube-proxy                1                   555e291ef738b       kube-proxy-5rmsk
	55b7d5868c6fd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   30 seconds ago       Exited              etcd                      1                   101f9e02523f5       etcd-pause-262675
	0d5d2920fb5e1       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   30 seconds ago       Exited              kube-apiserver            1                   1e5e7041edf50       kube-apiserver-pause-262675
	7919dde648e62       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   30 seconds ago       Exited              kube-controller-manager   1                   b19e372d53560       kube-controller-manager-pause-262675
	4ec4d9a0df946       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   30 seconds ago       Exited              kube-scheduler            1                   271b003eed7c9       kube-scheduler-pause-262675
	dae010ac08bc1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   e5edb36519bc4       coredns-76f75df574-ngdgs
	
	
	==> coredns [dae010ac08bc1411c206e818b06d0d211b7506c5fef244d38698f4920531d794] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[605284717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 22:35:44.411) (total time: 30004ms):
	Trace[605284717]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (22:36:14.414)
	Trace[605284717]: [30.004976889s] [30.004976889s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[93682609]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 22:35:44.411) (total time: 30004ms):
	Trace[93682609]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (22:36:14.415)
	Trace[93682609]: [30.004805194s] [30.004805194s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[460900663]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 22:35:44.414) (total time: 30002ms):
	Trace[460900663]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (22:36:14.415)
	Trace[460900663]: [30.002659203s] [30.002659203s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36041 - 31939 "HINFO IN 6584311188470642347.8290014462332789187. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009824263s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fbebdd4cfc4eca7e3dc8fb95f76c48ebd519ec863c867415e510b74e750d3c38] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58098 - 51747 "HINFO IN 2299870771224363883.213556319773758455. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00800984s
	
	
	==> describe nodes <==
	Name:               pause-262675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-262675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=pause-262675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T22_35_30_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:35:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-262675
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 22:37:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 22:36:54 +0000   Wed, 10 Apr 2024 22:35:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 22:36:54 +0000   Wed, 10 Apr 2024 22:35:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 22:36:54 +0000   Wed, 10 Apr 2024 22:35:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 22:36:54 +0000   Wed, 10 Apr 2024 22:35:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.144
	  Hostname:    pause-262675
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f029c4663934ce8a0b7057e37231fe4
	  System UUID:                4f029c46-6393-4ce8-a0b7-057e37231fe4
	  Boot ID:                    b4b87e6c-05f4-4a14-8ff6-6daa0be024c1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-ngdgs                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     91s
	  kube-system                 etcd-pause-262675                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         104s
	  kube-system                 kube-apiserver-pause-262675             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-pause-262675    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-5rmsk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-pause-262675             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 90s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     104s               kubelet          Node pause-262675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  104s               kubelet          Node pause-262675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s               kubelet          Node pause-262675 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                104s               kubelet          Node pause-262675 status is now: NodeReady
	  Normal  Starting                 104s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           93s                node-controller  Node pause-262675 event: Registered Node pause-262675 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-262675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-262675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-262675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-262675 event: Registered Node pause-262675 in Controller
	
	
	==> dmesg <==
	[  +0.062198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080923] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.203694] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.153204] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.320064] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.772668] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.070450] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.931993] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +1.164325] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.175449] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.081200] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.851405] systemd-fstab-generator[1488]: Ignoring "noauto" option for root device
	[  +0.138694] kauditd_printk_skb: 21 callbacks suppressed
	[Apr10 22:36] kauditd_printk_skb: 96 callbacks suppressed
	[ +19.413607] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +0.155127] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.201353] systemd-fstab-generator[2381]: Ignoring "noauto" option for root device
	[  +0.153474] systemd-fstab-generator[2393]: Ignoring "noauto" option for root device
	[  +0.830747] systemd-fstab-generator[2637]: Ignoring "noauto" option for root device
	[  +1.211255] systemd-fstab-generator[2983]: Ignoring "noauto" option for root device
	[  +4.802557] systemd-fstab-generator[3350]: Ignoring "noauto" option for root device
	[  +0.077097] kauditd_printk_skb: 221 callbacks suppressed
	[  +5.531868] kauditd_printk_skb: 38 callbacks suppressed
	[Apr10 22:37] kauditd_printk_skb: 14 callbacks suppressed
	[  +2.629119] systemd-fstab-generator[3879]: Ignoring "noauto" option for root device
	
	
	==> etcd [55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1] <==
	
	
	==> etcd [b1b1ca2a6bb13e3d051425ba5ffd9d607aff8b3e63d63b4f36c2b70492432712] <==
	{"level":"info","ts":"2024-04-10T22:36:51.470551Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-10T22:36:51.469773Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-10T22:36:51.469909Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:36:51.470646Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:36:51.470673Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:36:51.470185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 switched to configuration voters=(9939070016119413266)"}
	{"level":"info","ts":"2024-04-10T22:36:51.471962Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd276b60e5eb7d71","local-member-id":"89eeab852c889a12","added-peer-id":"89eeab852c889a12","added-peer-peer-urls":["https://192.168.50.144:2380"]}
	{"level":"info","ts":"2024-04-10T22:36:51.472102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd276b60e5eb7d71","local-member-id":"89eeab852c889a12","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:36:51.472149Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:36:51.470331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.144:2380"}
	{"level":"info","ts":"2024-04-10T22:36:51.47463Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.144:2380"}
	{"level":"info","ts":"2024-04-10T22:36:53.252452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-10T22:36:53.252525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-10T22:36:53.25259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 received MsgPreVoteResp from 89eeab852c889a12 at term 2"}
	{"level":"info","ts":"2024-04-10T22:36:53.252608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 became candidate at term 3"}
	{"level":"info","ts":"2024-04-10T22:36:53.252617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 received MsgVoteResp from 89eeab852c889a12 at term 3"}
	{"level":"info","ts":"2024-04-10T22:36:53.252629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 became leader at term 3"}
	{"level":"info","ts":"2024-04-10T22:36:53.252639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 89eeab852c889a12 elected leader 89eeab852c889a12 at term 3"}
	{"level":"info","ts":"2024-04-10T22:36:53.259907Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"89eeab852c889a12","local-member-attributes":"{Name:pause-262675 ClientURLs:[https://192.168.50.144:2379]}","request-path":"/0/members/89eeab852c889a12/attributes","cluster-id":"cd276b60e5eb7d71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-10T22:36:53.260103Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:36:53.260211Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:36:53.260726Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-10T22:36:53.260798Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-10T22:36:53.262292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.144:2379"}
	{"level":"info","ts":"2024-04-10T22:36:53.262486Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:37:15 up 2 min,  0 users,  load average: 1.49, 0.46, 0.16
	Linux pause-262675 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641] <==
	
	
	==> kube-apiserver [10329215440e7807f796ae1783552d6b7498727ff904355fafaa66a8e1c74966] <==
	I0410 22:36:54.625922       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0410 22:36:54.649676       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0410 22:36:54.649709       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0410 22:36:54.707767       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0410 22:36:54.709223       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0410 22:36:54.712081       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0410 22:36:54.732078       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0410 22:36:54.732118       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0410 22:36:54.732208       1 shared_informer.go:318] Caches are synced for configmaps
	I0410 22:36:54.746110       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0410 22:36:54.749837       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0410 22:36:54.752111       1 aggregator.go:165] initial CRD sync complete...
	I0410 22:36:54.752150       1 autoregister_controller.go:141] Starting autoregister controller
	I0410 22:36:54.752158       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0410 22:36:54.752163       1 cache.go:39] Caches are synced for autoregister controller
	I0410 22:36:54.773523       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0410 22:36:54.781694       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0410 22:36:55.618917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0410 22:36:56.430740       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0410 22:36:56.444890       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0410 22:36:56.505180       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0410 22:36:56.537478       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0410 22:36:56.544672       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0410 22:37:07.446315       1 controller.go:624] quota admission added evaluator for: endpoints
	I0410 22:37:07.543556       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [45b4c8ba68eda49b3fc10e4b005785a90c5375c5d84d4f3c8c3290cffdc9b02f] <==
	I0410 22:37:07.419615       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0410 22:37:07.422262       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0410 22:37:07.422300       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0410 22:37:07.423039       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0410 22:37:07.423098       1 shared_informer.go:318] Caches are synced for GC
	I0410 22:37:07.427012       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0410 22:37:07.431405       1 shared_informer.go:318] Caches are synced for endpoint
	I0410 22:37:07.438456       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0410 22:37:07.442363       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0410 22:37:07.450982       1 shared_informer.go:318] Caches are synced for taint
	I0410 22:37:07.451170       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0410 22:37:07.451405       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-262675"
	I0410 22:37:07.451580       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0410 22:37:07.451689       1 event.go:376] "Event occurred" object="pause-262675" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-262675 event: Registered Node pause-262675 in Controller"
	I0410 22:37:07.471940       1 shared_informer.go:318] Caches are synced for disruption
	I0410 22:37:07.480312       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0410 22:37:07.501179       1 shared_informer.go:318] Caches are synced for resource quota
	I0410 22:37:07.523393       1 shared_informer.go:318] Caches are synced for resource quota
	I0410 22:37:07.529767       1 shared_informer.go:318] Caches are synced for PV protection
	I0410 22:37:07.534202       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0410 22:37:07.610592       1 shared_informer.go:318] Caches are synced for attach detach
	I0410 22:37:07.614842       1 shared_informer.go:318] Caches are synced for persistent volume
	I0410 22:37:07.965537       1 shared_informer.go:318] Caches are synced for garbage collector
	I0410 22:37:08.018301       1 shared_informer.go:318] Caches are synced for garbage collector
	I0410 22:37:08.018347       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c] <==
	
	
	==> kube-proxy [c404daec01843347ac717ed0e35a18a21e313ef844ff10fa1c6d555de5c1aa3d] <==
	I0410 22:36:56.080181       1 server_others.go:72] "Using iptables proxy"
	I0410 22:36:56.099632       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.144"]
	I0410 22:36:56.203306       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 22:36:56.203369       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:36:56.203390       1 server_others.go:168] "Using iptables Proxier"
	I0410 22:36:56.219364       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:36:56.223586       1 server.go:865] "Version info" version="v1.29.3"
	I0410 22:36:56.223625       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:36:56.224828       1 config.go:188] "Starting service config controller"
	I0410 22:36:56.224881       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 22:36:56.224917       1 config.go:97] "Starting endpoint slice config controller"
	I0410 22:36:56.224921       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 22:36:56.234668       1 config.go:315] "Starting node config controller"
	I0410 22:36:56.234699       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 22:36:56.325011       1 shared_informer.go:318] Caches are synced for service config
	I0410 22:36:56.325163       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0410 22:36:56.335362       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee] <==
	I0410 22:36:46.564158       1 server_others.go:72] "Using iptables proxy"
	E0410 22:36:46.566982       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-262675\": dial tcp 192.168.50.144:8443: connect: connection refused"
	
	
	==> kube-scheduler [4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea] <==
	
	
	==> kube-scheduler [61456649ac3ae6a328789ca7aacd71dbf572c1e2af8b98dc1ebc2a7b6dc63fdd] <==
	I0410 22:36:52.164313       1 serving.go:380] Generated self-signed cert in-memory
	W0410 22:36:54.661172       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0410 22:36:54.661351       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 22:36:54.661388       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0410 22:36:54.661411       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0410 22:36:54.710423       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0410 22:36:54.710550       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:36:54.735689       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0410 22:36:54.738316       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0410 22:36:54.747160       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0410 22:36:54.747363       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0410 22:36:54.849327       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.741636    3357 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d79f2270031ca8da755169edef48bb81-flexvolume-dir\") pod \"kube-controller-manager-pause-262675\" (UID: \"d79f2270031ca8da755169edef48bb81\") " pod="kube-system/kube-controller-manager-pause-262675"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.833443    3357 kubelet_node_status.go:73] "Attempting to register node" node="pause-262675"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: E0410 22:36:50.834435    3357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.144:8443: connect: connection refused" node="pause-262675"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.976785    3357 scope.go:117] "RemoveContainer" containerID="55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.977989    3357 scope.go:117] "RemoveContainer" containerID="0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.979378    3357 scope.go:117] "RemoveContainer" containerID="7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.980606    3357 scope.go:117] "RemoveContainer" containerID="4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea"
	Apr 10 22:36:51 pause-262675 kubelet[3357]: E0410 22:36:51.136164    3357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-262675?timeout=10s\": dial tcp 192.168.50.144:8443: connect: connection refused" interval="800ms"
	Apr 10 22:36:51 pause-262675 kubelet[3357]: I0410 22:36:51.236763    3357 kubelet_node_status.go:73] "Attempting to register node" node="pause-262675"
	Apr 10 22:36:51 pause-262675 kubelet[3357]: E0410 22:36:51.237928    3357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.144:8443: connect: connection refused" node="pause-262675"
	Apr 10 22:36:51 pause-262675 kubelet[3357]: W0410 22:36:51.383552    3357 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-262675&limit=500&resourceVersion=0": dial tcp 192.168.50.144:8443: connect: connection refused
	Apr 10 22:36:51 pause-262675 kubelet[3357]: E0410 22:36:51.383626    3357 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-262675&limit=500&resourceVersion=0": dial tcp 192.168.50.144:8443: connect: connection refused
	Apr 10 22:36:52 pause-262675 kubelet[3357]: I0410 22:36:52.039994    3357 kubelet_node_status.go:73] "Attempting to register node" node="pause-262675"
	Apr 10 22:36:54 pause-262675 kubelet[3357]: I0410 22:36:54.787960    3357 kubelet_node_status.go:112] "Node was previously registered" node="pause-262675"
	Apr 10 22:36:54 pause-262675 kubelet[3357]: I0410 22:36:54.788072    3357 kubelet_node_status.go:76] "Successfully registered node" node="pause-262675"
	Apr 10 22:36:54 pause-262675 kubelet[3357]: I0410 22:36:54.790221    3357 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 10 22:36:54 pause-262675 kubelet[3357]: I0410 22:36:54.791337    3357 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: E0410 22:36:55.435393    3357 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-262675\" already exists" pod="kube-system/kube-controller-manager-pause-262675"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.493424    3357 apiserver.go:52] "Watching apiserver"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.496931    3357 topology_manager.go:215] "Topology Admit Handler" podUID="0e7d0245-1820-426e-8a54-a1df3db2c2a4" podNamespace="kube-system" podName="kube-proxy-5rmsk"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.497110    3357 topology_manager.go:215] "Topology Admit Handler" podUID="34376a83-ecec-4874-8fa9-653b3ba7a8fb" podNamespace="kube-system" podName="coredns-76f75df574-ngdgs"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.524666    3357 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.577463    3357 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e7d0245-1820-426e-8a54-a1df3db2c2a4-xtables-lock\") pod \"kube-proxy-5rmsk\" (UID: \"0e7d0245-1820-426e-8a54-a1df3db2c2a4\") " pod="kube-system/kube-proxy-5rmsk"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.577660    3357 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e7d0245-1820-426e-8a54-a1df3db2c2a4-lib-modules\") pod \"kube-proxy-5rmsk\" (UID: \"0e7d0245-1820-426e-8a54-a1df3db2c2a4\") " pod="kube-system/kube-proxy-5rmsk"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.798867    3357 scope.go:117] "RemoveContainer" containerID="d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-262675 -n pause-262675
helpers_test.go:261: (dbg) Run:  kubectl --context pause-262675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-262675 -n pause-262675
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-262675 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-262675 logs -n 25: (1.408502406s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-688825 sudo find         | cilium-688825             | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:32 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |                |                     |                     |
	| ssh     | -p cilium-688825 sudo crio         | cilium-688825             | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:32 UTC |                     |
	|         | config                             |                           |         |                |                     |                     |
	| delete  | -p cilium-688825                   | cilium-688825             | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:32 UTC | 10 Apr 24 22:32 UTC |
	| start   | -p cert-expiration-464519          | cert-expiration-464519    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:32 UTC | 10 Apr 24 22:34 UTC |
	|         | --memory=2048                      |                           |         |                |                     |                     |
	|         | --cert-expiration=3m               |                           |         |                |                     |                     |
	|         | --driver=kvm2                      |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| start   | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:33 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| delete  | -p offline-crio-874231             | offline-crio-874231       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:33 UTC |
	| start   | -p force-systemd-flag-738205       | force-systemd-flag-738205 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:34 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |                |                     |                     |
	|         | --alsologtostderr                  |                           |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| start   | -p running-upgrade-869202          | running-upgrade-869202    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:35 UTC |
	|         | --memory=2200                      |                           |         |                |                     |                     |
	|         | --alsologtostderr                  |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:33 UTC |
	| start   | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:33 UTC | 10 Apr 24 22:34 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-738205 ssh cat  | force-systemd-flag-738205 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:34 UTC | 10 Apr 24 22:34 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-738205       | force-systemd-flag-738205 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:34 UTC | 10 Apr 24 22:34 UTC |
	| start   | -p pause-262675 --memory=2048      | pause-262675              | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:34 UTC | 10 Apr 24 22:36 UTC |
	|         | --install-addons=false             |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-857710 sudo        | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:34 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |                |                     |                     |
	|         | service kubelet                    |                           |         |                |                     |                     |
	| stop    | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	| start   | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	|         | --driver=kvm2                      |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-869202          | running-upgrade-869202    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	| start   | -p kubernetes-upgrade-407031       | kubernetes-upgrade-407031 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC |                     |
	|         | --memory=2200                      |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |                |                     |                     |
	|         | --alsologtostderr                  |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-857710 sudo        | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |                |                     |                     |
	|         | service kubelet                    |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-857710             | NoKubernetes-857710       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:35 UTC | 10 Apr 24 22:35 UTC |
	| start   | -p stopped-upgrade-546741          | minikube                  | jenkins | v1.26.0        | 10 Apr 24 22:35 UTC | 10 Apr 24 22:37 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |                |                     |                     |
	|         |  --container-runtime=crio          |                           |         |                |                     |                     |
	| start   | -p pause-262675                    | pause-262675              | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:36 UTC | 10 Apr 24 22:37 UTC |
	|         | --alsologtostderr                  |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| start   | -p cert-expiration-464519          | cert-expiration-464519    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:37 UTC |                     |
	|         | --memory=2048                      |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h            |                           |         |                |                     |                     |
	|         | --driver=kvm2                      |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-546741 stop        | minikube                  | jenkins | v1.26.0        | 10 Apr 24 22:37 UTC | 10 Apr 24 22:37 UTC |
	| start   | -p stopped-upgrade-546741          | stopped-upgrade-546741    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:37 UTC |                     |
	|         | --memory=2200                      |                           |         |                |                     |                     |
	|         | --alsologtostderr                  |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |                |                     |                     |
	|         | --container-runtime=crio           |                           |         |                |                     |                     |
	|---------|------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:37:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:37:05.424187   53176 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:37:05.424479   53176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:37:05.424490   53176 out.go:304] Setting ErrFile to fd 2...
	I0410 22:37:05.424494   53176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:37:05.425110   53176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:37:05.426266   53176 out.go:298] Setting JSON to false
	I0410 22:37:05.427325   53176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4768,"bootTime":1712783858,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:37:05.427402   53176 start.go:139] virtualization: kvm guest
	I0410 22:37:05.429326   53176 out.go:177] * [stopped-upgrade-546741] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:37:05.430940   53176 notify.go:220] Checking for updates...
	I0410 22:37:05.430962   53176 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:37:05.432391   53176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:37:05.433815   53176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:37:05.435217   53176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:37:05.436656   53176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:37:05.438334   53176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:37:05.440087   53176 config.go:182] Loaded profile config "stopped-upgrade-546741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0410 22:37:05.440601   53176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:37:05.440652   53176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:37:05.456288   53176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I0410 22:37:05.456777   53176 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:37:05.457392   53176 main.go:141] libmachine: Using API Version  1
	I0410 22:37:05.457414   53176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:37:05.457784   53176 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:37:05.458025   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .DriverName
	I0410 22:37:05.460094   53176 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0410 22:37:05.461611   53176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:37:05.461971   53176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:37:05.462020   53176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:37:05.477114   53176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36895
	I0410 22:37:05.477582   53176 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:37:05.478048   53176 main.go:141] libmachine: Using API Version  1
	I0410 22:37:05.478064   53176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:37:05.478364   53176 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:37:05.478604   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .DriverName
	I0410 22:37:05.514360   53176 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:37:05.515629   53176 start.go:297] selected driver: kvm2
	I0410 22:37:05.515643   53176 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-546741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-546
741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0410 22:37:05.515776   53176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:37:05.516787   53176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:37:05.516879   53176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:37:05.531621   53176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:37:05.531968   53176 cni.go:84] Creating CNI manager for ""
	I0410 22:37:05.531983   53176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:37:05.532025   53176 start.go:340] cluster config:
	{Name:stopped-upgrade-546741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-546741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0410 22:37:05.532129   53176 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:37:05.533944   53176 out.go:177] * Starting "stopped-upgrade-546741" primary control-plane node in "stopped-upgrade-546741" cluster
	I0410 22:37:03.429103   53086 machine.go:94] provisionDockerMachine start ...
	I0410 22:37:03.429116   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:03.429355   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:03.432054   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.432689   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:03.432705   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.432883   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:03.433074   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.433250   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.433359   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:03.433567   53086 main.go:141] libmachine: Using SSH client type: native
	I0410 22:37:03.433747   53086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0410 22:37:03.433752   53086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:37:03.558073   53086 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-464519
	
	I0410 22:37:03.558090   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetMachineName
	I0410 22:37:03.558366   53086 buildroot.go:166] provisioning hostname "cert-expiration-464519"
	I0410 22:37:03.558409   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetMachineName
	I0410 22:37:03.558610   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:03.561669   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.562130   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:03.562164   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.562348   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:03.562528   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.562712   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.562856   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:03.562989   53086 main.go:141] libmachine: Using SSH client type: native
	I0410 22:37:03.563133   53086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0410 22:37:03.563139   53086 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-464519 && echo "cert-expiration-464519" | sudo tee /etc/hostname
	I0410 22:37:03.711018   53086 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-464519
	
	I0410 22:37:03.711051   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:03.714315   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.714673   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:03.714701   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.714872   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:03.715079   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.715285   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:03.715455   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:03.715628   53086 main.go:141] libmachine: Using SSH client type: native
	I0410 22:37:03.715867   53086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0410 22:37:03.715885   53086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-464519' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-464519/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-464519' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:37:03.843537   53086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:37:03.843578   53086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:37:03.843600   53086 buildroot.go:174] setting up certificates
	I0410 22:37:03.843610   53086 provision.go:84] configureAuth start
	I0410 22:37:03.843622   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetMachineName
	I0410 22:37:03.843948   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetIP
	I0410 22:37:03.847123   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.847525   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:03.847546   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.847742   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:03.850271   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.850658   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:03.850677   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:03.850860   53086 provision.go:143] copyHostCerts
	I0410 22:37:03.850923   53086 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:37:03.850941   53086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:37:03.851029   53086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:37:03.851168   53086 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:37:03.851174   53086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:37:03.851220   53086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:37:03.851324   53086 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:37:03.851330   53086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:37:03.851365   53086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:37:03.851452   53086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-464519 san=[127.0.0.1 192.168.72.34 cert-expiration-464519 localhost minikube]
	I0410 22:37:04.028839   53086 provision.go:177] copyRemoteCerts
	I0410 22:37:04.028889   53086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:37:04.028913   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:04.032492   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:04.032921   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:04.032947   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:04.033141   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:04.033343   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:04.033529   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:04.033685   53086 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/cert-expiration-464519/id_rsa Username:docker}
	I0410 22:37:04.131227   53086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:37:04.166019   53086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0410 22:37:04.195031   53086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:37:04.227661   53086 provision.go:87] duration metric: took 384.041729ms to configureAuth
	I0410 22:37:04.227679   53086 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:37:04.227869   53086 config.go:182] Loaded profile config "cert-expiration-464519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:37:04.227966   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:04.231043   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:04.231456   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:04.231479   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:04.231691   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:04.231896   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:04.232065   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:04.232253   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:04.232444   53086 main.go:141] libmachine: Using SSH client type: native
	I0410 22:37:04.232651   53086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0410 22:37:04.232664   53086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:37:06.090144   52815 pod_ready.go:102] pod "etcd-pause-262675" in "kube-system" namespace has status "Ready":"False"
	I0410 22:37:08.090861   52815 pod_ready.go:102] pod "etcd-pause-262675" in "kube-system" namespace has status "Ready":"False"
	I0410 22:37:10.090373   52815 pod_ready.go:92] pod "etcd-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.090394   52815 pod_ready.go:81] duration metric: took 13.006928664s for pod "etcd-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.090404   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.095592   52815 pod_ready.go:92] pod "kube-apiserver-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.095611   52815 pod_ready.go:81] duration metric: took 5.201297ms for pod "kube-apiserver-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.095623   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.100442   52815 pod_ready.go:92] pod "kube-controller-manager-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.100460   52815 pod_ready.go:81] duration metric: took 4.831274ms for pod "kube-controller-manager-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.100469   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5rmsk" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.105282   52815 pod_ready.go:92] pod "kube-proxy-5rmsk" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.105298   52815 pod_ready.go:81] duration metric: took 4.823295ms for pod "kube-proxy-5rmsk" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.105306   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.109779   52815 pod_ready.go:92] pod "kube-scheduler-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.109795   52815 pod_ready.go:81] duration metric: took 4.484609ms for pod "kube-scheduler-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.109802   52815 pod_ready.go:38] duration metric: took 13.540270367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:37:10.109817   52815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:37:10.122311   52815 ops.go:34] apiserver oom_adj: -16
	I0410 22:37:10.122334   52815 kubeadm.go:591] duration metric: took 23.469633278s to restartPrimaryControlPlane
	I0410 22:37:10.122343   52815 kubeadm.go:393] duration metric: took 23.594788555s to StartCluster
	I0410 22:37:10.122362   52815 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:37:10.122441   52815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:37:10.123372   52815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:37:10.123581   52815 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.144 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:37:10.125277   52815 out.go:177] * Verifying Kubernetes components...
	I0410 22:37:10.123668   52815 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:37:10.123866   52815 config.go:182] Loaded profile config "pause-262675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:37:10.126528   52815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:37:10.128054   52815 out.go:177] * Enabled addons: 
	I0410 22:37:05.535377   53176 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0410 22:37:05.535426   53176 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0410 22:37:05.535438   53176 cache.go:56] Caching tarball of preloaded images
	I0410 22:37:05.535512   53176 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:37:05.535523   53176 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0410 22:37:05.535612   53176 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/stopped-upgrade-546741/config.json ...
	I0410 22:37:05.535794   53176 start.go:360] acquireMachinesLock for stopped-upgrade-546741: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:37:10.138262   53176 start.go:364] duration metric: took 4.602424933s to acquireMachinesLock for "stopped-upgrade-546741"
	I0410 22:37:10.138330   53176 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:37:10.138389   53176 fix.go:54] fixHost starting: 
	I0410 22:37:10.138829   53176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:37:10.138873   53176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:37:10.155935   53176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0410 22:37:10.156392   53176 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:37:10.156921   53176 main.go:141] libmachine: Using API Version  1
	I0410 22:37:10.156955   53176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:37:10.157300   53176 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:37:10.157488   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .DriverName
	I0410 22:37:10.157657   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .GetState
	I0410 22:37:10.159019   53176 fix.go:112] recreateIfNeeded on stopped-upgrade-546741: state=Stopped err=<nil>
	I0410 22:37:10.159053   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .DriverName
	W0410 22:37:10.159216   53176 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:37:10.161030   53176 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-546741" ...
	I0410 22:37:10.129410   52815 addons.go:505] duration metric: took 5.753328ms for enable addons: enabled=[]
	I0410 22:37:10.162324   53176 main.go:141] libmachine: (stopped-upgrade-546741) Calling .Start
	I0410 22:37:10.162491   53176 main.go:141] libmachine: (stopped-upgrade-546741) Ensuring networks are active...
	I0410 22:37:10.163325   53176 main.go:141] libmachine: (stopped-upgrade-546741) Ensuring network default is active
	I0410 22:37:10.163729   53176 main.go:141] libmachine: (stopped-upgrade-546741) Ensuring network mk-stopped-upgrade-546741 is active
	I0410 22:37:10.164211   53176 main.go:141] libmachine: (stopped-upgrade-546741) Getting domain xml...
	I0410 22:37:10.164972   53176 main.go:141] libmachine: (stopped-upgrade-546741) Creating domain...
	I0410 22:37:09.874591   53086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:37:09.874607   53086 machine.go:97] duration metric: took 6.445496488s to provisionDockerMachine
	I0410 22:37:09.874618   53086 start.go:293] postStartSetup for "cert-expiration-464519" (driver="kvm2")
	I0410 22:37:09.874631   53086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:37:09.874661   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:09.875088   53086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:37:09.875115   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:09.878002   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:09.878430   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:09.878450   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:09.878683   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:09.878901   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:09.879077   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:09.879277   53086 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/cert-expiration-464519/id_rsa Username:docker}
	I0410 22:37:09.971846   53086 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:37:09.976830   53086 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:37:09.976844   53086 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:37:09.976902   53086 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:37:09.976975   53086 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:37:09.977052   53086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:37:09.988123   53086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:37:10.015700   53086 start.go:296] duration metric: took 141.070008ms for postStartSetup
	I0410 22:37:10.015727   53086 fix.go:56] duration metric: took 6.610185428s for fixHost
	I0410 22:37:10.015743   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:10.018443   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.018753   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:10.018771   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.018917   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:10.019117   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:10.019315   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:10.019499   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:10.019652   53086 main.go:141] libmachine: Using SSH client type: native
	I0410 22:37:10.019832   53086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0410 22:37:10.019837   53086 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:37:10.138119   53086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712788630.128092378
	
	I0410 22:37:10.138132   53086 fix.go:216] guest clock: 1712788630.128092378
	I0410 22:37:10.138138   53086 fix.go:229] Guest: 2024-04-10 22:37:10.128092378 +0000 UTC Remote: 2024-04-10 22:37:10.015728755 +0000 UTC m=+6.779050279 (delta=112.363623ms)
	I0410 22:37:10.138153   53086 fix.go:200] guest clock delta is within tolerance: 112.363623ms
	I0410 22:37:10.138157   53086 start.go:83] releasing machines lock for "cert-expiration-464519", held for 6.732623629s
	I0410 22:37:10.138188   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:10.138439   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetIP
	I0410 22:37:10.141588   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.141907   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:10.141933   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.142089   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:10.142801   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:10.143028   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .DriverName
	I0410 22:37:10.143123   53086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:37:10.143171   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:10.143264   53086 ssh_runner.go:195] Run: cat /version.json
	I0410 22:37:10.143282   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHHostname
	I0410 22:37:10.145969   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.146339   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:10.146354   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.146373   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.146597   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:10.146766   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:10.146828   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:41:37", ip: ""} in network mk-cert-expiration-464519: {Iface:virbr4 ExpiryTime:2024-04-10 23:33:30 +0000 UTC Type:0 Mac:52:54:00:50:41:37 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:cert-expiration-464519 Clientid:01:52:54:00:50:41:37}
	I0410 22:37:10.146850   53086 main.go:141] libmachine: (cert-expiration-464519) DBG | domain cert-expiration-464519 has defined IP address 192.168.72.34 and MAC address 52:54:00:50:41:37 in network mk-cert-expiration-464519
	I0410 22:37:10.146944   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:10.147064   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHPort
	I0410 22:37:10.147115   53086 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/cert-expiration-464519/id_rsa Username:docker}
	I0410 22:37:10.147192   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHKeyPath
	I0410 22:37:10.147335   53086 main.go:141] libmachine: (cert-expiration-464519) Calling .GetSSHUsername
	I0410 22:37:10.147471   53086 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/cert-expiration-464519/id_rsa Username:docker}
	I0410 22:37:10.265807   53086 ssh_runner.go:195] Run: systemctl --version
	I0410 22:37:10.272441   53086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:37:10.686228   53086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:37:10.769481   53086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:37:10.769529   53086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:37:10.803335   53086 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0410 22:37:10.803352   53086 start.go:494] detecting cgroup driver to use...
	I0410 22:37:10.803415   53086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:37:10.902128   53086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:37:10.938896   53086 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:37:10.938955   53086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:37:10.991708   53086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:37:11.030920   53086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:37:11.278880   53086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:37:11.557635   53086 docker.go:233] disabling docker service ...
	I0410 22:37:11.557693   53086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:37:11.591738   53086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:37:11.623721   53086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:37:11.839774   53086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:37:12.058321   53086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:37:12.076097   53086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:37:12.102618   53086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:37:12.102672   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.120038   53086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:37:12.120084   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.137410   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.152861   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.166721   53086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:37:12.179585   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.192124   53086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.204980   53086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:37:12.217760   53086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:37:12.230158   53086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:37:12.241590   53086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:37:12.409258   53086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:37:10.316461   52815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:37:10.340134   52815 node_ready.go:35] waiting up to 6m0s for node "pause-262675" to be "Ready" ...
	I0410 22:37:10.345173   52815 node_ready.go:49] node "pause-262675" has status "Ready":"True"
	I0410 22:37:10.345201   52815 node_ready.go:38] duration metric: took 5.035755ms for node "pause-262675" to be "Ready" ...
	I0410 22:37:10.345213   52815 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:37:10.492763   52815 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ngdgs" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.889352   52815 pod_ready.go:92] pod "coredns-76f75df574-ngdgs" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:10.889384   52815 pod_ready.go:81] duration metric: took 396.589751ms for pod "coredns-76f75df574-ngdgs" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:10.889396   52815 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:11.289819   52815 pod_ready.go:92] pod "etcd-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:11.289844   52815 pod_ready.go:81] duration metric: took 400.438701ms for pod "etcd-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:11.289856   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:11.689214   52815 pod_ready.go:92] pod "kube-apiserver-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:11.689246   52815 pod_ready.go:81] duration metric: took 399.381763ms for pod "kube-apiserver-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:11.689266   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.089017   52815 pod_ready.go:92] pod "kube-controller-manager-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:12.089041   52815 pod_ready.go:81] duration metric: took 399.764754ms for pod "kube-controller-manager-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.089054   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5rmsk" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.489063   52815 pod_ready.go:92] pod "kube-proxy-5rmsk" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:12.489088   52815 pod_ready.go:81] duration metric: took 400.026407ms for pod "kube-proxy-5rmsk" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.489097   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.889765   52815 pod_ready.go:92] pod "kube-scheduler-pause-262675" in "kube-system" namespace has status "Ready":"True"
	I0410 22:37:12.889789   52815 pod_ready.go:81] duration metric: took 400.684751ms for pod "kube-scheduler-pause-262675" in "kube-system" namespace to be "Ready" ...
	I0410 22:37:12.889799   52815 pod_ready.go:38] duration metric: took 2.544574728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:37:12.889815   52815 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:37:12.889871   52815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:37:12.906462   52815 api_server.go:72] duration metric: took 2.782852962s to wait for apiserver process to appear ...
	I0410 22:37:12.906499   52815 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:37:12.906523   52815 api_server.go:253] Checking apiserver healthz at https://192.168.50.144:8443/healthz ...
	I0410 22:37:12.912070   52815 api_server.go:279] https://192.168.50.144:8443/healthz returned 200:
	ok
	I0410 22:37:12.913283   52815 api_server.go:141] control plane version: v1.29.3
	I0410 22:37:12.913310   52815 api_server.go:131] duration metric: took 6.802595ms to wait for apiserver health ...
	I0410 22:37:12.913322   52815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:37:13.089996   52815 system_pods.go:59] 6 kube-system pods found
	I0410 22:37:13.090021   52815 system_pods.go:61] "coredns-76f75df574-ngdgs" [34376a83-ecec-4874-8fa9-653b3ba7a8fb] Running
	I0410 22:37:13.090026   52815 system_pods.go:61] "etcd-pause-262675" [2539bb29-407d-49c6-be2b-4b462715f551] Running
	I0410 22:37:13.090030   52815 system_pods.go:61] "kube-apiserver-pause-262675" [b4e1eb76-b1c8-4f96-8158-8887bd29b7c5] Running
	I0410 22:37:13.090033   52815 system_pods.go:61] "kube-controller-manager-pause-262675" [2383c61b-ce86-41e0-afd7-2d17d5377563] Running
	I0410 22:37:13.090038   52815 system_pods.go:61] "kube-proxy-5rmsk" [0e7d0245-1820-426e-8a54-a1df3db2c2a4] Running
	I0410 22:37:13.090041   52815 system_pods.go:61] "kube-scheduler-pause-262675" [cfcc8a79-4a26-4a6b-95a5-0d026d28eec3] Running
	I0410 22:37:13.090047   52815 system_pods.go:74] duration metric: took 176.719207ms to wait for pod list to return data ...
	I0410 22:37:13.090053   52815 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:37:13.289539   52815 default_sa.go:45] found service account: "default"
	I0410 22:37:13.289570   52815 default_sa.go:55] duration metric: took 199.510016ms for default service account to be created ...
	I0410 22:37:13.289582   52815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:37:13.493836   52815 system_pods.go:86] 6 kube-system pods found
	I0410 22:37:13.493861   52815 system_pods.go:89] "coredns-76f75df574-ngdgs" [34376a83-ecec-4874-8fa9-653b3ba7a8fb] Running
	I0410 22:37:13.493866   52815 system_pods.go:89] "etcd-pause-262675" [2539bb29-407d-49c6-be2b-4b462715f551] Running
	I0410 22:37:13.493870   52815 system_pods.go:89] "kube-apiserver-pause-262675" [b4e1eb76-b1c8-4f96-8158-8887bd29b7c5] Running
	I0410 22:37:13.493874   52815 system_pods.go:89] "kube-controller-manager-pause-262675" [2383c61b-ce86-41e0-afd7-2d17d5377563] Running
	I0410 22:37:13.493878   52815 system_pods.go:89] "kube-proxy-5rmsk" [0e7d0245-1820-426e-8a54-a1df3db2c2a4] Running
	I0410 22:37:13.493882   52815 system_pods.go:89] "kube-scheduler-pause-262675" [cfcc8a79-4a26-4a6b-95a5-0d026d28eec3] Running
	I0410 22:37:13.493888   52815 system_pods.go:126] duration metric: took 204.299925ms to wait for k8s-apps to be running ...
	I0410 22:37:13.493896   52815 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:37:13.493937   52815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:37:13.521435   52815 system_svc.go:56] duration metric: took 27.509543ms WaitForService to wait for kubelet
	I0410 22:37:13.521479   52815 kubeadm.go:576] duration metric: took 3.397871482s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:37:13.521512   52815 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:37:13.688311   52815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:37:13.688344   52815 node_conditions.go:123] node cpu capacity is 2
	I0410 22:37:13.688361   52815 node_conditions.go:105] duration metric: took 166.839352ms to run NodePressure ...
	I0410 22:37:13.688374   52815 start.go:240] waiting for startup goroutines ...
	I0410 22:37:13.688385   52815 start.go:245] waiting for cluster config update ...
	I0410 22:37:13.688409   52815 start.go:254] writing updated cluster config ...
	I0410 22:37:13.688798   52815 ssh_runner.go:195] Run: rm -f paused
	I0410 22:37:13.738564   52815 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:37:13.740852   52815 out.go:177] * Done! kubectl is now configured to use "pause-262675" cluster and "default" namespace by default
	I0410 22:37:11.462662   53176 main.go:141] libmachine: (stopped-upgrade-546741) Waiting to get IP...
	I0410 22:37:11.463693   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | domain stopped-upgrade-546741 has defined MAC address 52:54:00:ca:85:2b in network mk-stopped-upgrade-546741
	I0410 22:37:11.464200   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | unable to find current IP address of domain stopped-upgrade-546741 in network mk-stopped-upgrade-546741
	I0410 22:37:11.464272   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | I0410 22:37:11.464166   53227 retry.go:31] will retry after 305.250538ms: waiting for machine to come up
	I0410 22:37:11.770772   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | domain stopped-upgrade-546741 has defined MAC address 52:54:00:ca:85:2b in network mk-stopped-upgrade-546741
	I0410 22:37:11.771275   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | unable to find current IP address of domain stopped-upgrade-546741 in network mk-stopped-upgrade-546741
	I0410 22:37:11.771324   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | I0410 22:37:11.771231   53227 retry.go:31] will retry after 341.092464ms: waiting for machine to come up
	I0410 22:37:12.113851   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | domain stopped-upgrade-546741 has defined MAC address 52:54:00:ca:85:2b in network mk-stopped-upgrade-546741
	I0410 22:37:12.114336   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | unable to find current IP address of domain stopped-upgrade-546741 in network mk-stopped-upgrade-546741
	I0410 22:37:12.114364   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | I0410 22:37:12.114315   53227 retry.go:31] will retry after 293.013453ms: waiting for machine to come up
	I0410 22:37:12.408844   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | domain stopped-upgrade-546741 has defined MAC address 52:54:00:ca:85:2b in network mk-stopped-upgrade-546741
	I0410 22:37:12.409467   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | unable to find current IP address of domain stopped-upgrade-546741 in network mk-stopped-upgrade-546741
	I0410 22:37:12.409493   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | I0410 22:37:12.409443   53227 retry.go:31] will retry after 476.79556ms: waiting for machine to come up
	I0410 22:37:12.887875   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | domain stopped-upgrade-546741 has defined MAC address 52:54:00:ca:85:2b in network mk-stopped-upgrade-546741
	I0410 22:37:12.888381   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | unable to find current IP address of domain stopped-upgrade-546741 in network mk-stopped-upgrade-546741
	I0410 22:37:12.888428   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | I0410 22:37:12.888319   53227 retry.go:31] will retry after 487.352834ms: waiting for machine to come up
	I0410 22:37:13.377105   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | domain stopped-upgrade-546741 has defined MAC address 52:54:00:ca:85:2b in network mk-stopped-upgrade-546741
	I0410 22:37:13.377624   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | unable to find current IP address of domain stopped-upgrade-546741 in network mk-stopped-upgrade-546741
	I0410 22:37:13.377647   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | I0410 22:37:13.377560   53227 retry.go:31] will retry after 880.065214ms: waiting for machine to come up
	I0410 22:37:14.258839   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | domain stopped-upgrade-546741 has defined MAC address 52:54:00:ca:85:2b in network mk-stopped-upgrade-546741
	I0410 22:37:14.259416   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | unable to find current IP address of domain stopped-upgrade-546741 in network mk-stopped-upgrade-546741
	I0410 22:37:14.259447   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | I0410 22:37:14.259362   53227 retry.go:31] will retry after 1.034888739s: waiting for machine to come up
	I0410 22:37:15.296084   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | domain stopped-upgrade-546741 has defined MAC address 52:54:00:ca:85:2b in network mk-stopped-upgrade-546741
	I0410 22:37:15.296562   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | unable to find current IP address of domain stopped-upgrade-546741 in network mk-stopped-upgrade-546741
	I0410 22:37:15.296587   53176 main.go:141] libmachine: (stopped-upgrade-546741) DBG | I0410 22:37:15.296531   53227 retry.go:31] will retry after 1.222496054s: waiting for machine to come up
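	(The interleaved lines above, from pid 53176, show the KVM driver waiting for the stopped-upgrade-546741 domain to obtain an IP, sleeping a growing, jittered interval each time the lease is not yet visible: "will retry after ...: waiting for machine to come up". A minimal sketch of that poll-with-backoff pattern follows, assuming a hypothetical lookupIP helper in place of the real libvirt/DHCP query; it is illustrative, not minikube's retry.go.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for asking libvirt/DHCP for the domain's address.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address") // pretend the lease is not there yet
	}

	// waitForIP polls lookupIP until it succeeds or the deadline passes, growing the
	// delay and adding jitter between attempts, similar in spirit to the log above.
	func waitForIP(domain string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // grow the base delay each round
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
	}

	func main() {
		if _, err := waitForIP("stopped-upgrade-546741", 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}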
	
	
	==> CRI-O <==
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.575822432Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:837cf686abcff65dc7b800b587b093a2081b1fa50952fae4b0c85a62348bdb83,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-ngdgs,Uid:34376a83-ecec-4874-8fa9-653b3ba7a8fb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712788615823141491,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:36:55.496705217Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&PodSandboxMetadata{Name:kube-proxy-5rmsk,Uid:0e7d0245-1820-426e-8a54-a1df3db2c2a4,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1712788605860850409,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:35:42.900948899Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:556d6b70f03b84d0911d9231a7d6c440b06c57077911a1d7635bb466753a8e61,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-262675,Uid:300254c7602c01a47d7c7b015d2c108b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712788605843998119,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,tier: control-plane,},Annotations:map[string
]string{kubernetes.io/config.hash: 300254c7602c01a47d7c7b015d2c108b,kubernetes.io/config.seen: 2024-04-10T22:35:30.222074129Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:429cbb872f578512fe53c1ece73ed78a320d7c1afd42cf4ef85d4fda4a80289a,Metadata:&PodSandboxMetadata{Name:etcd-pause-262675,Uid:abbca0d000d237fc40d9ef2ad258eb84,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712788605792083571,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.144:2379,kubernetes.io/config.hash: abbca0d000d237fc40d9ef2ad258eb84,kubernetes.io/config.seen: 2024-04-10T22:35:30.222075102Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f80b1894e84960553130cd0e87d1d81676afccccc9ce5aa3d55a571b17bd
d3bd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-262675,Uid:1de360ec12402802bf8613b64a97ba7a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712788605784633558,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.144:8443,kubernetes.io/config.hash: 1de360ec12402802bf8613b64a97ba7a,kubernetes.io/config.seen: 2024-04-10T22:35:30.222068666Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8658846bdf398d3076e97d4f6b0a1407ea671749947a495e4d2870f126e9c8e1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-262675,Uid:d79f2270031ca8da755169edef48bb81,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1712788605752083546,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb81,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d79f2270031ca8da755169edef48bb81,kubernetes.io/config.seen: 2024-04-10T22:35:30.222072900Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=493b0404-38a9-4272-beae-6ef6b77a4ee8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.576597852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c58fff4e-5f50-4961-a1c4-187142eaba44 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.576675149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c58fff4e-5f50-4961-a1c4-187142eaba44 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.576817036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbebdd4cfc4eca7e3dc8fb95f76c48ebd519ec863c867415e510b74e750d3c38,PodSandboxId:837cf686abcff65dc7b800b587b093a2081b1fa50952fae4b0c85a62348bdb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712788616260958041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c404daec01843347ac717ed0e35a18a21e313ef844ff10fa1c6d555de5c1aa3d,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712788615811599269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b1ca2a6bb13e3d051425ba5ffd9d607aff8b3e63d63b4f36c2b70492432712,PodSandboxId:429cbb872f578512fe53c1ece73ed78a320d7c1afd42cf4ef85d4fda4a80289a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788611029673869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annot
ations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b4c8ba68eda49b3fc10e4b005785a90c5375c5d84d4f3c8c3290cffdc9b02f,PodSandboxId:8658846bdf398d3076e97d4f6b0a1407ea671749947a495e4d2870f126e9c8e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712788611014096175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb8
1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61456649ac3ae6a328789ca7aacd71dbf572c1e2af8b98dc1ebc2a7b6dc63fdd,PodSandboxId:556d6b70f03b84d0911d9231a7d6c440b06c57077911a1d7635bb466753a8e61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712788611027940741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10329215440e7807f796ae1783552d6b7498727ff904355fafaa66a8e1c74966,PodSandboxId:f80b1894e84960553130cd0e87d1d81676afccccc9ce5aa3d55a571b17bdd3bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712788610998915008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io
.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c58fff4e-5f50-4961-a1c4-187142eaba44 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.596463991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc83d5c8-f154-4e2e-ba89-6685df31fee0 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.596563198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc83d5c8-f154-4e2e-ba89-6685df31fee0 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.603013531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e84d1a88-1d03-4792-8273-5cb078d2fd8a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.603491107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712788636603462955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e84d1a88-1d03-4792-8273-5cb078d2fd8a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.606022601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=992b42fe-37db-459f-8162-5c4c00b80403 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.606118875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=992b42fe-37db-459f-8162-5c4c00b80403 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.606476081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbebdd4cfc4eca7e3dc8fb95f76c48ebd519ec863c867415e510b74e750d3c38,PodSandboxId:837cf686abcff65dc7b800b587b093a2081b1fa50952fae4b0c85a62348bdb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712788616260958041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c404daec01843347ac717ed0e35a18a21e313ef844ff10fa1c6d555de5c1aa3d,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712788615811599269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b1ca2a6bb13e3d051425ba5ffd9d607aff8b3e63d63b4f36c2b70492432712,PodSandboxId:429cbb872f578512fe53c1ece73ed78a320d7c1afd42cf4ef85d4fda4a80289a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788611029673869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annot
ations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b4c8ba68eda49b3fc10e4b005785a90c5375c5d84d4f3c8c3290cffdc9b02f,PodSandboxId:8658846bdf398d3076e97d4f6b0a1407ea671749947a495e4d2870f126e9c8e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712788611014096175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb8
1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61456649ac3ae6a328789ca7aacd71dbf572c1e2af8b98dc1ebc2a7b6dc63fdd,PodSandboxId:556d6b70f03b84d0911d9231a7d6c440b06c57077911a1d7635bb466753a8e61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712788611027940741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10329215440e7807f796ae1783552d6b7498727ff904355fafaa66a8e1c74966,PodSandboxId:f80b1894e84960553130cd0e87d1d81676afccccc9ce5aa3d55a571b17bdd3bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712788610998915008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io
.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712788606208819446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02
b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1,PodSandboxId:101f9e02523f5940b556e93bd5de9c016dae495e23cb2143cb675275d1a51054,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712788604241700297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c,PodSandboxId:b19e372d535607bb8671b31b11dd0453a68ab8c49c31de40fc1e84a679f82352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712788604115576044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb81,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641,PodSandboxId:1e5e7041edf50f9ed552e15076c03d560d52bdcdc2d2c6275a6671e01ea44248,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712788604142152927,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea,PodSandboxId:271b003eed7c9398b993bdec7fb35adffe097b6c65a75a2ede7a454185d7b4c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712788604044386398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae010ac08bc1411c206e818b06d0d211b7506c5fef244d38698f4920531d794,PodSandboxId:e5edb36519bc4a1aef8e68b54a0ac0c763748d9d23385619b19f20200fe963bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712788544036674650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=992b42fe-37db-459f-8162-5c4c00b80403 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.657361250Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04a16b0e-9de8-49a1-8c39-44d4f0e6ca75 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.657458778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04a16b0e-9de8-49a1-8c39-44d4f0e6ca75 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.658792303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af0afeed-ee9b-4891-8fe4-4ae85b00a201 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.659156148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712788636659134439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af0afeed-ee9b-4891-8fe4-4ae85b00a201 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.659900293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89e8a18d-cb74-4ea5-a166-0479e2dfaa9f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.659971095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89e8a18d-cb74-4ea5-a166-0479e2dfaa9f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.660227923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbebdd4cfc4eca7e3dc8fb95f76c48ebd519ec863c867415e510b74e750d3c38,PodSandboxId:837cf686abcff65dc7b800b587b093a2081b1fa50952fae4b0c85a62348bdb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712788616260958041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c404daec01843347ac717ed0e35a18a21e313ef844ff10fa1c6d555de5c1aa3d,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712788615811599269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b1ca2a6bb13e3d051425ba5ffd9d607aff8b3e63d63b4f36c2b70492432712,PodSandboxId:429cbb872f578512fe53c1ece73ed78a320d7c1afd42cf4ef85d4fda4a80289a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788611029673869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annot
ations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b4c8ba68eda49b3fc10e4b005785a90c5375c5d84d4f3c8c3290cffdc9b02f,PodSandboxId:8658846bdf398d3076e97d4f6b0a1407ea671749947a495e4d2870f126e9c8e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712788611014096175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb8
1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61456649ac3ae6a328789ca7aacd71dbf572c1e2af8b98dc1ebc2a7b6dc63fdd,PodSandboxId:556d6b70f03b84d0911d9231a7d6c440b06c57077911a1d7635bb466753a8e61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712788611027940741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10329215440e7807f796ae1783552d6b7498727ff904355fafaa66a8e1c74966,PodSandboxId:f80b1894e84960553130cd0e87d1d81676afccccc9ce5aa3d55a571b17bdd3bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712788610998915008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io
.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712788606208819446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02
b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1,PodSandboxId:101f9e02523f5940b556e93bd5de9c016dae495e23cb2143cb675275d1a51054,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712788604241700297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c,PodSandboxId:b19e372d535607bb8671b31b11dd0453a68ab8c49c31de40fc1e84a679f82352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712788604115576044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb81,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641,PodSandboxId:1e5e7041edf50f9ed552e15076c03d560d52bdcdc2d2c6275a6671e01ea44248,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712788604142152927,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea,PodSandboxId:271b003eed7c9398b993bdec7fb35adffe097b6c65a75a2ede7a454185d7b4c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712788604044386398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae010ac08bc1411c206e818b06d0d211b7506c5fef244d38698f4920531d794,PodSandboxId:e5edb36519bc4a1aef8e68b54a0ac0c763748d9d23385619b19f20200fe963bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712788544036674650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89e8a18d-cb74-4ea5-a166-0479e2dfaa9f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.702870233Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6de0a1c8-b96d-429a-b3b5-bde18d96c184 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.702944760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6de0a1c8-b96d-429a-b3b5-bde18d96c184 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.704362797Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85b9bebe-297a-491e-92f2-2f37ed971cca name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.704719514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712788636704699980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85b9bebe-297a-491e-92f2-2f37ed971cca name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.705321486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2c8c60e-79a8-453a-ab0f-560c83e1b8cb name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.705376297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2c8c60e-79a8-453a-ab0f-560c83e1b8cb name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:37:16 pause-262675 crio[2878]: time="2024-04-10 22:37:16.705609597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbebdd4cfc4eca7e3dc8fb95f76c48ebd519ec863c867415e510b74e750d3c38,PodSandboxId:837cf686abcff65dc7b800b587b093a2081b1fa50952fae4b0c85a62348bdb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712788616260958041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c404daec01843347ac717ed0e35a18a21e313ef844ff10fa1c6d555de5c1aa3d,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712788615811599269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b1ca2a6bb13e3d051425ba5ffd9d607aff8b3e63d63b4f36c2b70492432712,PodSandboxId:429cbb872f578512fe53c1ece73ed78a320d7c1afd42cf4ef85d4fda4a80289a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712788611029673869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annot
ations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b4c8ba68eda49b3fc10e4b005785a90c5375c5d84d4f3c8c3290cffdc9b02f,PodSandboxId:8658846bdf398d3076e97d4f6b0a1407ea671749947a495e4d2870f126e9c8e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712788611014096175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb8
1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61456649ac3ae6a328789ca7aacd71dbf572c1e2af8b98dc1ebc2a7b6dc63fdd,PodSandboxId:556d6b70f03b84d0911d9231a7d6c440b06c57077911a1d7635bb466753a8e61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712788611027940741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10329215440e7807f796ae1783552d6b7498727ff904355fafaa66a8e1c74966,PodSandboxId:f80b1894e84960553130cd0e87d1d81676afccccc9ce5aa3d55a571b17bdd3bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712788610998915008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io
.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee,PodSandboxId:555e291ef738bf0a255cfc5ab3adac30c356b538e791098ec1b55f853970875b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712788606208819446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rmsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e7d0245-1820-426e-8a54-a1df3db2c2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 182b02
b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1,PodSandboxId:101f9e02523f5940b556e93bd5de9c016dae495e23cb2143cb675275d1a51054,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712788604241700297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbca0d000d237fc40d9ef2ad258eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 356c006c,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c,PodSandboxId:b19e372d535607bb8671b31b11dd0453a68ab8c49c31de40fc1e84a679f82352,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712788604115576044,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79f2270031ca8da755169edef48bb81,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641,PodSandboxId:1e5e7041edf50f9ed552e15076c03d560d52bdcdc2d2c6275a6671e01ea44248,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712788604142152927,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de360ec12402802bf8613b64a97ba7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7b34b2b8,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea,PodSandboxId:271b003eed7c9398b993bdec7fb35adffe097b6c65a75a2ede7a454185d7b4c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712788604044386398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-262675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300254c7602c01a47d7c7b015d2c108b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae010ac08bc1411c206e818b06d0d211b7506c5fef244d38698f4920531d794,PodSandboxId:e5edb36519bc4a1aef8e68b54a0ac0c763748d9d23385619b19f20200fe963bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712788544036674650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ngdgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34376a83-ecec-4874-8fa9-653b3ba7a8fb,},Annotations:map[string]string{io.kubernetes.container.hash: cfd390af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2c8c60e-79a8-453a-ab0f-560c83e1b8cb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fbebdd4cfc4ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago       Running             coredns                   1                   837cf686abcff       coredns-76f75df574-ngdgs
	c404daec01843       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   20 seconds ago       Running             kube-proxy                2                   555e291ef738b       kube-proxy-5rmsk
	b1b1ca2a6bb13       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago       Running             etcd                      2                   429cbb872f578       etcd-pause-262675
	61456649ac3ae       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   25 seconds ago       Running             kube-scheduler            2                   556d6b70f03b8       kube-scheduler-pause-262675
	45b4c8ba68eda       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   25 seconds ago       Running             kube-controller-manager   2                   8658846bdf398       kube-controller-manager-pause-262675
	10329215440e7       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   25 seconds ago       Running             kube-apiserver            2                   f80b1894e8496       kube-apiserver-pause-262675
	d859f4900e3c6       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   30 seconds ago       Exited              kube-proxy                1                   555e291ef738b       kube-proxy-5rmsk
	55b7d5868c6fd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   32 seconds ago       Exited              etcd                      1                   101f9e02523f5       etcd-pause-262675
	0d5d2920fb5e1       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   32 seconds ago       Exited              kube-apiserver            1                   1e5e7041edf50       kube-apiserver-pause-262675
	7919dde648e62       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   32 seconds ago       Exited              kube-controller-manager   1                   b19e372d53560       kube-controller-manager-pause-262675
	4ec4d9a0df946       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   32 seconds ago       Exited              kube-scheduler            1                   271b003eed7c9       kube-scheduler-pause-262675
	dae010ac08bc1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   e5edb36519bc4       coredns-76f75df574-ngdgs
	
	
	==> coredns [dae010ac08bc1411c206e818b06d0d211b7506c5fef244d38698f4920531d794] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[605284717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 22:35:44.411) (total time: 30004ms):
	Trace[605284717]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (22:36:14.414)
	Trace[605284717]: [30.004976889s] [30.004976889s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[93682609]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 22:35:44.411) (total time: 30004ms):
	Trace[93682609]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (22:36:14.415)
	Trace[93682609]: [30.004805194s] [30.004805194s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[460900663]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Apr-2024 22:35:44.414) (total time: 30002ms):
	Trace[460900663]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (22:36:14.415)
	Trace[460900663]: [30.002659203s] [30.002659203s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36041 - 31939 "HINFO IN 6584311188470642347.8290014462332789187. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009824263s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fbebdd4cfc4eca7e3dc8fb95f76c48ebd519ec863c867415e510b74e750d3c38] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58098 - 51747 "HINFO IN 2299870771224363883.213556319773758455. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00800984s
	
	
	==> describe nodes <==
	Name:               pause-262675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-262675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=pause-262675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T22_35_30_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:35:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-262675
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 22:37:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 22:36:54 +0000   Wed, 10 Apr 2024 22:35:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 22:36:54 +0000   Wed, 10 Apr 2024 22:35:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 22:36:54 +0000   Wed, 10 Apr 2024 22:35:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 22:36:54 +0000   Wed, 10 Apr 2024 22:35:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.144
	  Hostname:    pause-262675
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f029c4663934ce8a0b7057e37231fe4
	  System UUID:                4f029c46-6393-4ce8-a0b7-057e37231fe4
	  Boot ID:                    b4b87e6c-05f4-4a14-8ff6-6daa0be024c1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-ngdgs                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     94s
	  kube-system                 etcd-pause-262675                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         107s
	  kube-system                 kube-apiserver-pause-262675             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-pause-262675    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-5rmsk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-pause-262675             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 92s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     107s               kubelet          Node pause-262675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  107s               kubelet          Node pause-262675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s               kubelet          Node pause-262675 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                107s               kubelet          Node pause-262675 status is now: NodeReady
	  Normal  Starting                 107s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           96s                node-controller  Node pause-262675 event: Registered Node pause-262675 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-262675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-262675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-262675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-262675 event: Registered Node pause-262675 in Controller
	
	
	==> dmesg <==
	[  +0.062198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080923] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.203694] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.153204] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.320064] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.772668] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.070450] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.931993] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +1.164325] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.175449] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.081200] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.851405] systemd-fstab-generator[1488]: Ignoring "noauto" option for root device
	[  +0.138694] kauditd_printk_skb: 21 callbacks suppressed
	[Apr10 22:36] kauditd_printk_skb: 96 callbacks suppressed
	[ +19.413607] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +0.155127] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.201353] systemd-fstab-generator[2381]: Ignoring "noauto" option for root device
	[  +0.153474] systemd-fstab-generator[2393]: Ignoring "noauto" option for root device
	[  +0.830747] systemd-fstab-generator[2637]: Ignoring "noauto" option for root device
	[  +1.211255] systemd-fstab-generator[2983]: Ignoring "noauto" option for root device
	[  +4.802557] systemd-fstab-generator[3350]: Ignoring "noauto" option for root device
	[  +0.077097] kauditd_printk_skb: 221 callbacks suppressed
	[  +5.531868] kauditd_printk_skb: 38 callbacks suppressed
	[Apr10 22:37] kauditd_printk_skb: 14 callbacks suppressed
	[  +2.629119] systemd-fstab-generator[3879]: Ignoring "noauto" option for root device
	
	
	==> etcd [55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1] <==
	
	
	==> etcd [b1b1ca2a6bb13e3d051425ba5ffd9d607aff8b3e63d63b4f36c2b70492432712] <==
	{"level":"info","ts":"2024-04-10T22:36:51.470551Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-10T22:36:51.469773Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-10T22:36:51.469909Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:36:51.470646Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:36:51.470673Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-10T22:36:51.470185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 switched to configuration voters=(9939070016119413266)"}
	{"level":"info","ts":"2024-04-10T22:36:51.471962Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd276b60e5eb7d71","local-member-id":"89eeab852c889a12","added-peer-id":"89eeab852c889a12","added-peer-peer-urls":["https://192.168.50.144:2380"]}
	{"level":"info","ts":"2024-04-10T22:36:51.472102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd276b60e5eb7d71","local-member-id":"89eeab852c889a12","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:36:51.472149Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:36:51.470331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.144:2380"}
	{"level":"info","ts":"2024-04-10T22:36:51.47463Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.144:2380"}
	{"level":"info","ts":"2024-04-10T22:36:53.252452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-10T22:36:53.252525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-10T22:36:53.25259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 received MsgPreVoteResp from 89eeab852c889a12 at term 2"}
	{"level":"info","ts":"2024-04-10T22:36:53.252608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 became candidate at term 3"}
	{"level":"info","ts":"2024-04-10T22:36:53.252617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 received MsgVoteResp from 89eeab852c889a12 at term 3"}
	{"level":"info","ts":"2024-04-10T22:36:53.252629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"89eeab852c889a12 became leader at term 3"}
	{"level":"info","ts":"2024-04-10T22:36:53.252639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 89eeab852c889a12 elected leader 89eeab852c889a12 at term 3"}
	{"level":"info","ts":"2024-04-10T22:36:53.259907Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"89eeab852c889a12","local-member-attributes":"{Name:pause-262675 ClientURLs:[https://192.168.50.144:2379]}","request-path":"/0/members/89eeab852c889a12/attributes","cluster-id":"cd276b60e5eb7d71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-10T22:36:53.260103Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:36:53.260211Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:36:53.260726Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-10T22:36:53.260798Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-10T22:36:53.262292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.144:2379"}
	{"level":"info","ts":"2024-04-10T22:36:53.262486Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:37:17 up 2 min,  0 users,  load average: 1.37, 0.45, 0.16
	Linux pause-262675 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641] <==
	
	
	==> kube-apiserver [10329215440e7807f796ae1783552d6b7498727ff904355fafaa66a8e1c74966] <==
	I0410 22:36:54.625922       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0410 22:36:54.649676       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0410 22:36:54.649709       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0410 22:36:54.707767       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0410 22:36:54.709223       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0410 22:36:54.712081       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0410 22:36:54.732078       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0410 22:36:54.732118       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0410 22:36:54.732208       1 shared_informer.go:318] Caches are synced for configmaps
	I0410 22:36:54.746110       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0410 22:36:54.749837       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0410 22:36:54.752111       1 aggregator.go:165] initial CRD sync complete...
	I0410 22:36:54.752150       1 autoregister_controller.go:141] Starting autoregister controller
	I0410 22:36:54.752158       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0410 22:36:54.752163       1 cache.go:39] Caches are synced for autoregister controller
	I0410 22:36:54.773523       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0410 22:36:54.781694       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0410 22:36:55.618917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0410 22:36:56.430740       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0410 22:36:56.444890       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0410 22:36:56.505180       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0410 22:36:56.537478       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0410 22:36:56.544672       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0410 22:37:07.446315       1 controller.go:624] quota admission added evaluator for: endpoints
	I0410 22:37:07.543556       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [45b4c8ba68eda49b3fc10e4b005785a90c5375c5d84d4f3c8c3290cffdc9b02f] <==
	I0410 22:37:07.419615       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0410 22:37:07.422262       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0410 22:37:07.422300       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0410 22:37:07.423039       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0410 22:37:07.423098       1 shared_informer.go:318] Caches are synced for GC
	I0410 22:37:07.427012       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0410 22:37:07.431405       1 shared_informer.go:318] Caches are synced for endpoint
	I0410 22:37:07.438456       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0410 22:37:07.442363       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0410 22:37:07.450982       1 shared_informer.go:318] Caches are synced for taint
	I0410 22:37:07.451170       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0410 22:37:07.451405       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-262675"
	I0410 22:37:07.451580       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0410 22:37:07.451689       1 event.go:376] "Event occurred" object="pause-262675" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-262675 event: Registered Node pause-262675 in Controller"
	I0410 22:37:07.471940       1 shared_informer.go:318] Caches are synced for disruption
	I0410 22:37:07.480312       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0410 22:37:07.501179       1 shared_informer.go:318] Caches are synced for resource quota
	I0410 22:37:07.523393       1 shared_informer.go:318] Caches are synced for resource quota
	I0410 22:37:07.529767       1 shared_informer.go:318] Caches are synced for PV protection
	I0410 22:37:07.534202       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0410 22:37:07.610592       1 shared_informer.go:318] Caches are synced for attach detach
	I0410 22:37:07.614842       1 shared_informer.go:318] Caches are synced for persistent volume
	I0410 22:37:07.965537       1 shared_informer.go:318] Caches are synced for garbage collector
	I0410 22:37:08.018301       1 shared_informer.go:318] Caches are synced for garbage collector
	I0410 22:37:08.018347       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c] <==
	
	
	==> kube-proxy [c404daec01843347ac717ed0e35a18a21e313ef844ff10fa1c6d555de5c1aa3d] <==
	I0410 22:36:56.080181       1 server_others.go:72] "Using iptables proxy"
	I0410 22:36:56.099632       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.144"]
	I0410 22:36:56.203306       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 22:36:56.203369       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:36:56.203390       1 server_others.go:168] "Using iptables Proxier"
	I0410 22:36:56.219364       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:36:56.223586       1 server.go:865] "Version info" version="v1.29.3"
	I0410 22:36:56.223625       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:36:56.224828       1 config.go:188] "Starting service config controller"
	I0410 22:36:56.224881       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 22:36:56.224917       1 config.go:97] "Starting endpoint slice config controller"
	I0410 22:36:56.224921       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 22:36:56.234668       1 config.go:315] "Starting node config controller"
	I0410 22:36:56.234699       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 22:36:56.325011       1 shared_informer.go:318] Caches are synced for service config
	I0410 22:36:56.325163       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0410 22:36:56.335362       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee] <==
	I0410 22:36:46.564158       1 server_others.go:72] "Using iptables proxy"
	E0410 22:36:46.566982       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-262675\": dial tcp 192.168.50.144:8443: connect: connection refused"
	
	
	==> kube-scheduler [4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea] <==
	
	
	==> kube-scheduler [61456649ac3ae6a328789ca7aacd71dbf572c1e2af8b98dc1ebc2a7b6dc63fdd] <==
	I0410 22:36:52.164313       1 serving.go:380] Generated self-signed cert in-memory
	W0410 22:36:54.661172       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0410 22:36:54.661351       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 22:36:54.661388       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0410 22:36:54.661411       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0410 22:36:54.710423       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0410 22:36:54.710550       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:36:54.735689       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0410 22:36:54.738316       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0410 22:36:54.747160       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0410 22:36:54.747363       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0410 22:36:54.849327       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.741636    3357 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d79f2270031ca8da755169edef48bb81-flexvolume-dir\") pod \"kube-controller-manager-pause-262675\" (UID: \"d79f2270031ca8da755169edef48bb81\") " pod="kube-system/kube-controller-manager-pause-262675"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.833443    3357 kubelet_node_status.go:73] "Attempting to register node" node="pause-262675"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: E0410 22:36:50.834435    3357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.144:8443: connect: connection refused" node="pause-262675"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.976785    3357 scope.go:117] "RemoveContainer" containerID="55b7d5868c6fde665f9fae9d132c63b6f5753ee0ea0360723096c2fd8273c5e1"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.977989    3357 scope.go:117] "RemoveContainer" containerID="0d5d2920fb5e11e0bbd6eb4650d80e25e82853103b66f052f118117af38bb641"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.979378    3357 scope.go:117] "RemoveContainer" containerID="7919dde648e62a3b6cfbaa50cf895abdaaef02d4fb09b084593d54b66a62c43c"
	Apr 10 22:36:50 pause-262675 kubelet[3357]: I0410 22:36:50.980606    3357 scope.go:117] "RemoveContainer" containerID="4ec4d9a0df94625ef5e7a4bd667bf67d19db4d6651500972577fa4eb5a3cb3ea"
	Apr 10 22:36:51 pause-262675 kubelet[3357]: E0410 22:36:51.136164    3357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-262675?timeout=10s\": dial tcp 192.168.50.144:8443: connect: connection refused" interval="800ms"
	Apr 10 22:36:51 pause-262675 kubelet[3357]: I0410 22:36:51.236763    3357 kubelet_node_status.go:73] "Attempting to register node" node="pause-262675"
	Apr 10 22:36:51 pause-262675 kubelet[3357]: E0410 22:36:51.237928    3357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.144:8443: connect: connection refused" node="pause-262675"
	Apr 10 22:36:51 pause-262675 kubelet[3357]: W0410 22:36:51.383552    3357 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-262675&limit=500&resourceVersion=0": dial tcp 192.168.50.144:8443: connect: connection refused
	Apr 10 22:36:51 pause-262675 kubelet[3357]: E0410 22:36:51.383626    3357 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-262675&limit=500&resourceVersion=0": dial tcp 192.168.50.144:8443: connect: connection refused
	Apr 10 22:36:52 pause-262675 kubelet[3357]: I0410 22:36:52.039994    3357 kubelet_node_status.go:73] "Attempting to register node" node="pause-262675"
	Apr 10 22:36:54 pause-262675 kubelet[3357]: I0410 22:36:54.787960    3357 kubelet_node_status.go:112] "Node was previously registered" node="pause-262675"
	Apr 10 22:36:54 pause-262675 kubelet[3357]: I0410 22:36:54.788072    3357 kubelet_node_status.go:76] "Successfully registered node" node="pause-262675"
	Apr 10 22:36:54 pause-262675 kubelet[3357]: I0410 22:36:54.790221    3357 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 10 22:36:54 pause-262675 kubelet[3357]: I0410 22:36:54.791337    3357 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: E0410 22:36:55.435393    3357 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-262675\" already exists" pod="kube-system/kube-controller-manager-pause-262675"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.493424    3357 apiserver.go:52] "Watching apiserver"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.496931    3357 topology_manager.go:215] "Topology Admit Handler" podUID="0e7d0245-1820-426e-8a54-a1df3db2c2a4" podNamespace="kube-system" podName="kube-proxy-5rmsk"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.497110    3357 topology_manager.go:215] "Topology Admit Handler" podUID="34376a83-ecec-4874-8fa9-653b3ba7a8fb" podNamespace="kube-system" podName="coredns-76f75df574-ngdgs"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.524666    3357 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.577463    3357 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e7d0245-1820-426e-8a54-a1df3db2c2a4-xtables-lock\") pod \"kube-proxy-5rmsk\" (UID: \"0e7d0245-1820-426e-8a54-a1df3db2c2a4\") " pod="kube-system/kube-proxy-5rmsk"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.577660    3357 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e7d0245-1820-426e-8a54-a1df3db2c2a4-lib-modules\") pod \"kube-proxy-5rmsk\" (UID: \"0e7d0245-1820-426e-8a54-a1df3db2c2a4\") " pod="kube-system/kube-proxy-5rmsk"
	Apr 10 22:36:55 pause-262675 kubelet[3357]: I0410 22:36:55.798867    3357 scope.go:117] "RemoveContainer" containerID="d859f4900e3c62aaeb3e33febb019653b89218111578d09f1acc555d215905ee"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-262675 -n pause-262675
helpers_test.go:261: (dbg) Run:  kubectl --context pause-262675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (52.79s)
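A minimal sketch for replaying the two post-mortem queries above by hand, assuming the pause-262675 profile and its kubeconfig context from this run are still present on the agent; this is not part of the test harness. The quotes around the jsonpath expression are added here only so an interactive shell does not glob-expand the asterisk; kubectl receives the same argument either way.

	# Re-check the profile's API server state (the same query helpers_test.go:254 runs above)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-262675 -n pause-262675

	# List any pods not in the Running phase, across all namespaces (the same query helpers_test.go:261 runs above)
	kubectl --context pause-262675 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running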

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (271.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-862528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-862528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m31.00069561s)

                                                
                                                
-- stdout --
	* [old-k8s-version-862528] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18610
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-862528" primary control-plane node in "old-k8s-version-862528" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 22:37:55.119212   54076 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:37:55.119636   54076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:37:55.119650   54076 out.go:304] Setting ErrFile to fd 2...
	I0410 22:37:55.119658   54076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:37:55.120085   54076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:37:55.121392   54076 out.go:298] Setting JSON to false
	I0410 22:37:55.122856   54076 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4817,"bootTime":1712783858,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:37:55.122943   54076 start.go:139] virtualization: kvm guest
	I0410 22:37:55.125271   54076 out.go:177] * [old-k8s-version-862528] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:37:55.126635   54076 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:37:55.126634   54076 notify.go:220] Checking for updates...
	I0410 22:37:55.127984   54076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:37:55.129481   54076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:37:55.131025   54076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:37:55.132865   54076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:37:55.134431   54076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:37:55.136384   54076 config.go:182] Loaded profile config "cert-expiration-464519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:37:55.136549   54076 config.go:182] Loaded profile config "cert-options-849843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:37:55.136651   54076 config.go:182] Loaded profile config "kubernetes-upgrade-407031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:37:55.136779   54076 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:37:55.174500   54076 out.go:177] * Using the kvm2 driver based on user configuration
	I0410 22:37:55.175929   54076 start.go:297] selected driver: kvm2
	I0410 22:37:55.175946   54076 start.go:901] validating driver "kvm2" against <nil>
	I0410 22:37:55.175962   54076 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:37:55.177013   54076 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:37:55.177116   54076 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:37:55.193440   54076 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:37:55.193495   54076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0410 22:37:55.193688   54076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:37:55.193755   54076 cni.go:84] Creating CNI manager for ""
	I0410 22:37:55.193767   54076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:37:55.193773   54076 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0410 22:37:55.193818   54076 start.go:340] cluster config:
	{Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:37:55.193912   54076 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:37:55.195762   54076 out.go:177] * Starting "old-k8s-version-862528" primary control-plane node in "old-k8s-version-862528" cluster
	I0410 22:37:55.196781   54076 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:37:55.196820   54076 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0410 22:37:55.196833   54076 cache.go:56] Caching tarball of preloaded images
	I0410 22:37:55.196927   54076 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:37:55.196949   54076 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0410 22:37:55.197054   54076 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json ...
	I0410 22:37:55.197081   54076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json: {Name:mk2b0c2999fe7f286b4293fdc459d3d508eef116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:37:55.197234   54076 start.go:360] acquireMachinesLock for old-k8s-version-862528: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:37:55.197296   54076 start.go:364] duration metric: took 42.873µs to acquireMachinesLock for "old-k8s-version-862528"
	I0410 22:37:55.197332   54076 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:37:55.197425   54076 start.go:125] createHost starting for "" (driver="kvm2")
	I0410 22:37:55.198904   54076 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0410 22:37:55.199044   54076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:37:55.199086   54076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:37:55.213487   54076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0410 22:37:55.213966   54076 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:37:55.214502   54076 main.go:141] libmachine: Using API Version  1
	I0410 22:37:55.214523   54076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:37:55.214876   54076 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:37:55.215125   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:37:55.215312   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:37:55.215478   54076 start.go:159] libmachine.API.Create for "old-k8s-version-862528" (driver="kvm2")
	I0410 22:37:55.215512   54076 client.go:168] LocalClient.Create starting
	I0410 22:37:55.215546   54076 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem
	I0410 22:37:55.215593   54076 main.go:141] libmachine: Decoding PEM data...
	I0410 22:37:55.215616   54076 main.go:141] libmachine: Parsing certificate...
	I0410 22:37:55.215678   54076 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem
	I0410 22:37:55.215697   54076 main.go:141] libmachine: Decoding PEM data...
	I0410 22:37:55.215711   54076 main.go:141] libmachine: Parsing certificate...
	I0410 22:37:55.215727   54076 main.go:141] libmachine: Running pre-create checks...
	I0410 22:37:55.215742   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .PreCreateCheck
	I0410 22:37:55.216067   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetConfigRaw
	I0410 22:37:55.216532   54076 main.go:141] libmachine: Creating machine...
	I0410 22:37:55.216548   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .Create
	I0410 22:37:55.216708   54076 main.go:141] libmachine: (old-k8s-version-862528) Creating KVM machine...
	I0410 22:37:55.218052   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found existing default KVM network
	I0410 22:37:55.219655   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:55.219476   54099 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:74:01} reservation:<nil>}
	I0410 22:37:55.220637   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:55.220555   54099 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:29:e4:b6} reservation:<nil>}
	I0410 22:37:55.221709   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:55.221601   54099 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289340}
	I0410 22:37:55.221735   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | created network xml: 
	I0410 22:37:55.221749   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | <network>
	I0410 22:37:55.221772   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG |   <name>mk-old-k8s-version-862528</name>
	I0410 22:37:55.221788   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG |   <dns enable='no'/>
	I0410 22:37:55.221794   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG |   
	I0410 22:37:55.221807   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0410 22:37:55.221817   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG |     <dhcp>
	I0410 22:37:55.221831   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0410 22:37:55.221844   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG |     </dhcp>
	I0410 22:37:55.221857   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG |   </ip>
	I0410 22:37:55.221864   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG |   
	I0410 22:37:55.221876   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | </network>
	I0410 22:37:55.221883   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | 
	I0410 22:37:55.226932   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | trying to create private KVM network mk-old-k8s-version-862528 192.168.61.0/24...
	I0410 22:37:55.304525   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | private KVM network mk-old-k8s-version-862528 192.168.61.0/24 created
	I0410 22:37:55.304562   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:55.304447   54099 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:37:55.304575   54076 main.go:141] libmachine: (old-k8s-version-862528) Setting up store path in /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528 ...
	I0410 22:37:55.304595   54076 main.go:141] libmachine: (old-k8s-version-862528) Building disk image from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso
	I0410 22:37:55.304608   54076 main.go:141] libmachine: (old-k8s-version-862528) Downloading /home/jenkins/minikube-integration/18610-5679/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso...
	I0410 22:37:55.546152   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:55.546051   54099 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa...
	I0410 22:37:55.846477   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:55.846332   54099 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/old-k8s-version-862528.rawdisk...
	I0410 22:37:55.846511   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Writing magic tar header
	I0410 22:37:55.846530   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Writing SSH key tar header
	I0410 22:37:55.846542   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:55.846512   54099 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528 ...
	I0410 22:37:55.846649   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528
	I0410 22:37:55.846683   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines
	I0410 22:37:55.846721   54076 main.go:141] libmachine: (old-k8s-version-862528) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528 (perms=drwx------)
	I0410 22:37:55.846736   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:37:55.846755   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679
	I0410 22:37:55.846770   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0410 22:37:55.846785   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Checking permissions on dir: /home/jenkins
	I0410 22:37:55.846803   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Checking permissions on dir: /home
	I0410 22:37:55.846814   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Skipping /home - not owner
	I0410 22:37:55.846829   54076 main.go:141] libmachine: (old-k8s-version-862528) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines (perms=drwxr-xr-x)
	I0410 22:37:55.846844   54076 main.go:141] libmachine: (old-k8s-version-862528) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube (perms=drwxr-xr-x)
	I0410 22:37:55.846861   54076 main.go:141] libmachine: (old-k8s-version-862528) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679 (perms=drwxrwxr-x)
	I0410 22:37:55.846872   54076 main.go:141] libmachine: (old-k8s-version-862528) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0410 22:37:55.846892   54076 main.go:141] libmachine: (old-k8s-version-862528) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0410 22:37:55.846923   54076 main.go:141] libmachine: (old-k8s-version-862528) Creating domain...
	I0410 22:37:55.848056   54076 main.go:141] libmachine: (old-k8s-version-862528) define libvirt domain using xml: 
	I0410 22:37:55.848077   54076 main.go:141] libmachine: (old-k8s-version-862528) <domain type='kvm'>
	I0410 22:37:55.848106   54076 main.go:141] libmachine: (old-k8s-version-862528)   <name>old-k8s-version-862528</name>
	I0410 22:37:55.848121   54076 main.go:141] libmachine: (old-k8s-version-862528)   <memory unit='MiB'>2200</memory>
	I0410 22:37:55.848130   54076 main.go:141] libmachine: (old-k8s-version-862528)   <vcpu>2</vcpu>
	I0410 22:37:55.848137   54076 main.go:141] libmachine: (old-k8s-version-862528)   <features>
	I0410 22:37:55.848145   54076 main.go:141] libmachine: (old-k8s-version-862528)     <acpi/>
	I0410 22:37:55.848164   54076 main.go:141] libmachine: (old-k8s-version-862528)     <apic/>
	I0410 22:37:55.848176   54076 main.go:141] libmachine: (old-k8s-version-862528)     <pae/>
	I0410 22:37:55.848183   54076 main.go:141] libmachine: (old-k8s-version-862528)     
	I0410 22:37:55.848193   54076 main.go:141] libmachine: (old-k8s-version-862528)   </features>
	I0410 22:37:55.848202   54076 main.go:141] libmachine: (old-k8s-version-862528)   <cpu mode='host-passthrough'>
	I0410 22:37:55.848213   54076 main.go:141] libmachine: (old-k8s-version-862528)   
	I0410 22:37:55.848220   54076 main.go:141] libmachine: (old-k8s-version-862528)   </cpu>
	I0410 22:37:55.848229   54076 main.go:141] libmachine: (old-k8s-version-862528)   <os>
	I0410 22:37:55.848236   54076 main.go:141] libmachine: (old-k8s-version-862528)     <type>hvm</type>
	I0410 22:37:55.848273   54076 main.go:141] libmachine: (old-k8s-version-862528)     <boot dev='cdrom'/>
	I0410 22:37:55.848281   54076 main.go:141] libmachine: (old-k8s-version-862528)     <boot dev='hd'/>
	I0410 22:37:55.848297   54076 main.go:141] libmachine: (old-k8s-version-862528)     <bootmenu enable='no'/>
	I0410 22:37:55.848304   54076 main.go:141] libmachine: (old-k8s-version-862528)   </os>
	I0410 22:37:55.848312   54076 main.go:141] libmachine: (old-k8s-version-862528)   <devices>
	I0410 22:37:55.848319   54076 main.go:141] libmachine: (old-k8s-version-862528)     <disk type='file' device='cdrom'>
	I0410 22:37:55.848333   54076 main.go:141] libmachine: (old-k8s-version-862528)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/boot2docker.iso'/>
	I0410 22:37:55.848341   54076 main.go:141] libmachine: (old-k8s-version-862528)       <target dev='hdc' bus='scsi'/>
	I0410 22:37:55.848351   54076 main.go:141] libmachine: (old-k8s-version-862528)       <readonly/>
	I0410 22:37:55.848358   54076 main.go:141] libmachine: (old-k8s-version-862528)     </disk>
	I0410 22:37:55.848367   54076 main.go:141] libmachine: (old-k8s-version-862528)     <disk type='file' device='disk'>
	I0410 22:37:55.848375   54076 main.go:141] libmachine: (old-k8s-version-862528)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0410 22:37:55.848388   54076 main.go:141] libmachine: (old-k8s-version-862528)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/old-k8s-version-862528.rawdisk'/>
	I0410 22:37:55.848407   54076 main.go:141] libmachine: (old-k8s-version-862528)       <target dev='hda' bus='virtio'/>
	I0410 22:37:55.848415   54076 main.go:141] libmachine: (old-k8s-version-862528)     </disk>
	I0410 22:37:55.848423   54076 main.go:141] libmachine: (old-k8s-version-862528)     <interface type='network'>
	I0410 22:37:55.848432   54076 main.go:141] libmachine: (old-k8s-version-862528)       <source network='mk-old-k8s-version-862528'/>
	I0410 22:37:55.848439   54076 main.go:141] libmachine: (old-k8s-version-862528)       <model type='virtio'/>
	I0410 22:37:55.848447   54076 main.go:141] libmachine: (old-k8s-version-862528)     </interface>
	I0410 22:37:55.848459   54076 main.go:141] libmachine: (old-k8s-version-862528)     <interface type='network'>
	I0410 22:37:55.848468   54076 main.go:141] libmachine: (old-k8s-version-862528)       <source network='default'/>
	I0410 22:37:55.848480   54076 main.go:141] libmachine: (old-k8s-version-862528)       <model type='virtio'/>
	I0410 22:37:55.848491   54076 main.go:141] libmachine: (old-k8s-version-862528)     </interface>
	I0410 22:37:55.848498   54076 main.go:141] libmachine: (old-k8s-version-862528)     <serial type='pty'>
	I0410 22:37:55.848528   54076 main.go:141] libmachine: (old-k8s-version-862528)       <target port='0'/>
	I0410 22:37:55.848555   54076 main.go:141] libmachine: (old-k8s-version-862528)     </serial>
	I0410 22:37:55.848569   54076 main.go:141] libmachine: (old-k8s-version-862528)     <console type='pty'>
	I0410 22:37:55.848578   54076 main.go:141] libmachine: (old-k8s-version-862528)       <target type='serial' port='0'/>
	I0410 22:37:55.848600   54076 main.go:141] libmachine: (old-k8s-version-862528)     </console>
	I0410 22:37:55.848624   54076 main.go:141] libmachine: (old-k8s-version-862528)     <rng model='virtio'>
	I0410 22:37:55.848640   54076 main.go:141] libmachine: (old-k8s-version-862528)       <backend model='random'>/dev/random</backend>
	I0410 22:37:55.848660   54076 main.go:141] libmachine: (old-k8s-version-862528)     </rng>
	I0410 22:37:55.848671   54076 main.go:141] libmachine: (old-k8s-version-862528)     
	I0410 22:37:55.848681   54076 main.go:141] libmachine: (old-k8s-version-862528)     
	I0410 22:37:55.848690   54076 main.go:141] libmachine: (old-k8s-version-862528)   </devices>
	I0410 22:37:55.848701   54076 main.go:141] libmachine: (old-k8s-version-862528) </domain>
	I0410 22:37:55.848714   54076 main.go:141] libmachine: (old-k8s-version-862528) 
	I0410 22:37:55.853584   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:c8:19:b1 in network default
	I0410 22:37:55.854426   54076 main.go:141] libmachine: (old-k8s-version-862528) Ensuring networks are active...
	I0410 22:37:55.854454   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:37:55.855362   54076 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network default is active
	I0410 22:37:55.855833   54076 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network mk-old-k8s-version-862528 is active
	I0410 22:37:55.856721   54076 main.go:141] libmachine: (old-k8s-version-862528) Getting domain xml...
	I0410 22:37:55.857765   54076 main.go:141] libmachine: (old-k8s-version-862528) Creating domain...
	I0410 22:37:57.438847   54076 main.go:141] libmachine: (old-k8s-version-862528) Waiting to get IP...
	I0410 22:37:57.439790   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:37:57.440310   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:37:57.440350   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:57.440271   54099 retry.go:31] will retry after 189.892953ms: waiting for machine to come up
	I0410 22:37:57.632281   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:37:57.632930   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:37:57.632962   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:57.632868   54099 retry.go:31] will retry after 358.243173ms: waiting for machine to come up
	I0410 22:37:57.993166   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:37:57.993775   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:37:57.993808   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:57.993728   54099 retry.go:31] will retry after 430.103842ms: waiting for machine to come up
	I0410 22:37:58.425217   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:37:58.425747   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:37:58.425776   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:58.425704   54099 retry.go:31] will retry after 381.701747ms: waiting for machine to come up
	I0410 22:37:58.809687   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:37:58.810235   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:37:58.810284   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:58.810187   54099 retry.go:31] will retry after 729.63661ms: waiting for machine to come up
	I0410 22:37:59.541214   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:37:59.541886   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:37:59.541910   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:37:59.541826   54099 retry.go:31] will retry after 595.400924ms: waiting for machine to come up
	I0410 22:38:00.138854   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:00.139455   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:38:00.139485   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:38:00.139408   54099 retry.go:31] will retry after 862.803885ms: waiting for machine to come up
	I0410 22:38:01.003955   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:01.004455   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:38:01.004486   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:38:01.004387   54099 retry.go:31] will retry after 1.200369945s: waiting for machine to come up
	I0410 22:38:02.206475   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:02.207101   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:38:02.207130   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:38:02.207048   54099 retry.go:31] will retry after 1.798287822s: waiting for machine to come up
	I0410 22:38:04.008078   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:04.008619   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:38:04.008655   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:38:04.008577   54099 retry.go:31] will retry after 1.873858055s: waiting for machine to come up
	I0410 22:38:05.883699   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:05.884297   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:38:05.884330   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:38:05.884244   54099 retry.go:31] will retry after 1.910046569s: waiting for machine to come up
	I0410 22:38:07.795629   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:07.796144   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:38:07.796173   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:38:07.796097   54099 retry.go:31] will retry after 3.419239967s: waiting for machine to come up
	I0410 22:38:11.216721   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:11.217205   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:38:11.217223   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:38:11.217182   54099 retry.go:31] will retry after 3.680922946s: waiting for machine to come up
	I0410 22:38:14.901577   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:14.902131   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:38:14.902170   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:38:14.902093   54099 retry.go:31] will retry after 3.690544208s: waiting for machine to come up
	I0410 22:38:18.594497   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:18.595018   54076 main.go:141] libmachine: (old-k8s-version-862528) Found IP for machine: 192.168.61.178
	I0410 22:38:18.595051   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has current primary IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:18.595061   54076 main.go:141] libmachine: (old-k8s-version-862528) Reserving static IP address...
	I0410 22:38:18.595409   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"} in network mk-old-k8s-version-862528
	I0410 22:38:18.674426   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Getting to WaitForSSH function...
	I0410 22:38:18.674461   54076 main.go:141] libmachine: (old-k8s-version-862528) Reserved static IP address: 192.168.61.178
	I0410 22:38:18.674510   54076 main.go:141] libmachine: (old-k8s-version-862528) Waiting for SSH to be available...
	I0410 22:38:18.677208   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:18.677619   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:18.677652   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:18.677832   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH client type: external
	I0410 22:38:18.677855   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa (-rw-------)
	I0410 22:38:18.677883   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:38:18.677904   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | About to run SSH command:
	I0410 22:38:18.677917   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | exit 0
	I0410 22:38:18.801226   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | SSH cmd err, output: <nil>: 
	I0410 22:38:18.801539   54076 main.go:141] libmachine: (old-k8s-version-862528) KVM machine creation complete!
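	The "waiting for machine to come up" retries above amount to polling libvirt's DHCP lease table for the domain's MAC until an address appears, then probing SSH with `exit 0`. A rough manual equivalent, illustrative only and using the names from this run:
	
		virsh net-dhcp-leases mk-old-k8s-version-862528        # lease table; shows 52:54:00:d0:b7:c9 -> 192.168.61.178
		virsh domifaddr old-k8s-version-862528 --source lease  # same information, keyed by domain
		ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
		  -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa \
		  docker@192.168.61.178 'exit 0'                       # the same liveness check the driver runs
	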
	I0410 22:38:18.801826   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetConfigRaw
	I0410 22:38:18.802414   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:38:18.802648   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:38:18.802812   54076 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0410 22:38:18.802825   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetState
	I0410 22:38:18.804274   54076 main.go:141] libmachine: Detecting operating system of created instance...
	I0410 22:38:18.804288   54076 main.go:141] libmachine: Waiting for SSH to be available...
	I0410 22:38:18.804294   54076 main.go:141] libmachine: Getting to WaitForSSH function...
	I0410 22:38:18.804300   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:18.807025   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:18.807426   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:18.807460   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:18.807600   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:38:18.807800   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:18.807994   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:18.808150   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:38:18.808331   54076 main.go:141] libmachine: Using SSH client type: native
	I0410 22:38:18.808604   54076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:38:18.808621   54076 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0410 22:38:18.912253   54076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:38:18.912284   54076 main.go:141] libmachine: Detecting the provisioner...
	I0410 22:38:18.912292   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:18.915541   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:18.916012   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:18.916043   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:18.916290   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:38:18.916614   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:18.916907   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:18.917098   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:38:18.917311   54076 main.go:141] libmachine: Using SSH client type: native
	I0410 22:38:18.917560   54076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:38:18.917577   54076 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0410 22:38:19.021524   54076 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0410 22:38:19.021616   54076 main.go:141] libmachine: found compatible host: buildroot
	I0410 22:38:19.021626   54076 main.go:141] libmachine: Provisioning with buildroot...
	I0410 22:38:19.021638   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:38:19.021896   54076 buildroot.go:166] provisioning hostname "old-k8s-version-862528"
	I0410 22:38:19.021920   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:38:19.022210   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:19.025409   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.025846   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:19.025880   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.026059   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:38:19.026251   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:19.026407   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:19.026549   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:38:19.026787   54076 main.go:141] libmachine: Using SSH client type: native
	I0410 22:38:19.026985   54076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:38:19.027004   54076 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862528 && echo "old-k8s-version-862528" | sudo tee /etc/hostname
	I0410 22:38:19.144577   54076 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862528
	
	I0410 22:38:19.144614   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:19.147460   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.147826   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:19.147858   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.148005   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:38:19.148221   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:19.148391   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:19.148607   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:38:19.148756   54076 main.go:141] libmachine: Using SSH client type: native
	I0410 22:38:19.148943   54076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:38:19.148969   54076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:38:19.262704   54076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:38:19.262755   54076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:38:19.262783   54076 buildroot.go:174] setting up certificates
	I0410 22:38:19.262794   54076 provision.go:84] configureAuth start
	I0410 22:38:19.262803   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:38:19.263086   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:38:19.265890   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.266290   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:19.266321   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.266461   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:19.268540   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.268856   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:19.268883   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.268954   54076 provision.go:143] copyHostCerts
	I0410 22:38:19.269004   54076 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:38:19.269021   54076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:38:19.269077   54076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:38:19.269187   54076 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:38:19.269197   54076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:38:19.269218   54076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:38:19.269277   54076 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:38:19.269286   54076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:38:19.269360   54076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:38:19.269445   54076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862528 san=[127.0.0.1 192.168.61.178 localhost minikube old-k8s-version-862528]
	I0410 22:38:19.536953   54076 provision.go:177] copyRemoteCerts
	I0410 22:38:19.537010   54076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:38:19.537035   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:19.540218   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.540552   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:19.540582   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.540779   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:38:19.540974   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:19.541134   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:38:19.541291   54076 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:38:19.623273   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:38:19.649632   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0410 22:38:19.675483   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:38:19.701836   54076 provision.go:87] duration metric: took 439.032033ms to configureAuth
	I0410 22:38:19.701871   54076 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:38:19.702066   54076 config.go:182] Loaded profile config "old-k8s-version-862528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:38:19.702140   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:19.705304   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.705727   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:19.705758   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.705931   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:38:19.706158   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:19.706326   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:19.706460   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:38:19.706627   54076 main.go:141] libmachine: Using SSH client type: native
	I0410 22:38:19.706827   54076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:38:19.706856   54076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:38:19.989492   54076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:38:19.989523   54076 main.go:141] libmachine: Checking connection to Docker...
	I0410 22:38:19.989535   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetURL
	I0410 22:38:19.990985   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using libvirt version 6000000
	I0410 22:38:19.993875   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.994287   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:19.994321   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.994532   54076 main.go:141] libmachine: Docker is up and running!
	I0410 22:38:19.994549   54076 main.go:141] libmachine: Reticulating splines...
	I0410 22:38:19.994556   54076 client.go:171] duration metric: took 24.779035836s to LocalClient.Create
	I0410 22:38:19.994578   54076 start.go:167] duration metric: took 24.779102182s to libmachine.API.Create "old-k8s-version-862528"
	I0410 22:38:19.994591   54076 start.go:293] postStartSetup for "old-k8s-version-862528" (driver="kvm2")
	I0410 22:38:19.994605   54076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:38:19.994627   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:38:19.994905   54076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:38:19.994929   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:19.997421   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.997739   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:19.997757   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:19.997985   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:38:19.998193   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:19.998367   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:38:19.998511   54076 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:38:20.079865   54076 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:38:20.085089   54076 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:38:20.085119   54076 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:38:20.085182   54076 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:38:20.085257   54076 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:38:20.085360   54076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:38:20.095712   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:38:20.128057   54076 start.go:296] duration metric: took 133.448822ms for postStartSetup
	I0410 22:38:20.128124   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetConfigRaw
	I0410 22:38:20.128786   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:38:20.132366   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:20.132862   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:20.132892   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:20.133169   54076 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json ...
	I0410 22:38:20.133421   54076 start.go:128] duration metric: took 24.935983796s to createHost
	I0410 22:38:20.133450   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:20.136569   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:20.136911   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:20.136959   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:20.137168   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:38:20.137460   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:20.137655   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:20.137862   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:38:20.138085   54076 main.go:141] libmachine: Using SSH client type: native
	I0410 22:38:20.138322   54076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:38:20.138340   54076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0410 22:38:20.249801   54076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712788700.230922574
	
	I0410 22:38:20.249824   54076 fix.go:216] guest clock: 1712788700.230922574
	I0410 22:38:20.249846   54076 fix.go:229] Guest: 2024-04-10 22:38:20.230922574 +0000 UTC Remote: 2024-04-10 22:38:20.133435799 +0000 UTC m=+25.060259928 (delta=97.486775ms)
	I0410 22:38:20.249872   54076 fix.go:200] guest clock delta is within tolerance: 97.486775ms
	I0410 22:38:20.249878   54076 start.go:83] releasing machines lock for "old-k8s-version-862528", held for 25.052568708s
	I0410 22:38:20.249904   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:38:20.250201   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:38:20.253492   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:20.253881   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:20.253912   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:20.254125   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:38:20.254725   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:38:20.254905   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:38:20.255008   54076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:38:20.255060   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:20.255123   54076 ssh_runner.go:195] Run: cat /version.json
	I0410 22:38:20.255161   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:38:20.258133   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:20.258212   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:20.258571   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:20.258612   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:20.258651   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:20.258669   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:20.258859   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:38:20.258912   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:38:20.259062   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:20.259139   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:38:20.259233   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:38:20.259320   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:38:20.259399   54076 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:38:20.259493   54076 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:38:20.379056   54076 ssh_runner.go:195] Run: systemctl --version
	I0410 22:38:20.389333   54076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:38:20.571830   54076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:38:20.580163   54076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:38:20.580236   54076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:38:20.597391   54076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:38:20.597420   54076 start.go:494] detecting cgroup driver to use...
	I0410 22:38:20.597489   54076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:38:20.615564   54076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:38:20.631667   54076 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:38:20.631750   54076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:38:20.646357   54076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:38:20.661728   54076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:38:20.786804   54076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:38:20.958013   54076 docker.go:233] disabling docker service ...
	I0410 22:38:20.958075   54076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:38:20.981620   54076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:38:20.997438   54076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:38:21.151965   54076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:38:21.309613   54076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:38:21.324908   54076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:38:21.347527   54076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0410 22:38:21.347589   54076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:38:21.359177   54076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:38:21.359266   54076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:38:21.374018   54076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:38:21.387445   54076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:38:21.400067   54076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:38:21.415080   54076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:38:21.426673   54076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:38:21.426745   54076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:38:21.442713   54076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:38:21.453705   54076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:38:21.611568   54076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:38:21.778886   54076 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:38:21.778952   54076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:38:21.784578   54076 start.go:562] Will wait 60s for crictl version
	I0410 22:38:21.784653   54076 ssh_runner.go:195] Run: which crictl
	I0410 22:38:21.790051   54076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:38:21.831257   54076 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:38:21.831338   54076 ssh_runner.go:195] Run: crio --version
	I0410 22:38:21.864924   54076 ssh_runner.go:195] Run: crio --version
	I0410 22:38:21.902810   54076 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0410 22:38:21.904384   54076 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:38:21.907659   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:21.908239   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:38:11 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:38:21.908268   54076 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:38:21.908591   54076 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0410 22:38:21.913572   54076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:38:21.928659   54076 kubeadm.go:877] updating cluster {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:38:21.928832   54076 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:38:21.928891   54076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:38:21.982315   54076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:38:21.982412   54076 ssh_runner.go:195] Run: which lz4
	I0410 22:38:21.988847   54076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0410 22:38:21.994586   54076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:38:21.994634   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0410 22:38:23.889193   54076 crio.go:462] duration metric: took 1.900393525s to copy over tarball
	I0410 22:38:23.889273   54076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:38:26.601071   54076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.711755929s)
	I0410 22:38:26.601099   54076 crio.go:469] duration metric: took 2.711876956s to extract the tarball
	I0410 22:38:26.601105   54076 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:38:26.643993   54076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:38:26.690553   54076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:38:26.690578   54076 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:38:26.690658   54076 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:38:26.690739   54076 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0410 22:38:26.690754   54076 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0410 22:38:26.690658   54076 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:38:26.690669   54076 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:38:26.690949   54076 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:38:26.690710   54076 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:38:26.691259   54076 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:38:26.692443   54076 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:38:26.692469   54076 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0410 22:38:26.692476   54076 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0410 22:38:26.692484   54076 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:38:26.692486   54076 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:38:26.692533   54076 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:38:26.692547   54076 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:38:26.692606   54076 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:38:26.893740   54076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0410 22:38:26.894378   54076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:38:26.895033   54076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:38:26.896433   54076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0410 22:38:26.917503   54076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0410 22:38:26.924799   54076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:38:26.932530   54076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:38:27.048366   54076 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0410 22:38:27.048427   54076 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0410 22:38:27.048462   54076 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0410 22:38:27.048497   54076 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:38:27.048546   54076 ssh_runner.go:195] Run: which crictl
	I0410 22:38:27.048474   54076 ssh_runner.go:195] Run: which crictl
	I0410 22:38:27.054324   54076 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0410 22:38:27.054371   54076 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:38:27.054420   54076 ssh_runner.go:195] Run: which crictl
	I0410 22:38:27.109039   54076 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0410 22:38:27.109071   54076 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0410 22:38:27.109084   54076 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0410 22:38:27.109099   54076 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:38:27.109102   54076 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0410 22:38:27.109128   54076 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:38:27.109133   54076 ssh_runner.go:195] Run: which crictl
	I0410 22:38:27.109160   54076 ssh_runner.go:195] Run: which crictl
	I0410 22:38:27.109131   54076 ssh_runner.go:195] Run: which crictl
	I0410 22:38:27.114975   54076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0410 22:38:27.115023   54076 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0410 22:38:27.115043   54076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:38:27.115054   54076 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:38:27.115072   54076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:38:27.115106   54076 ssh_runner.go:195] Run: which crictl
	I0410 22:38:27.121085   54076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0410 22:38:27.121162   54076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0410 22:38:27.122572   54076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:38:27.136897   54076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:38:27.258559   54076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0410 22:38:27.287188   54076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0410 22:38:27.287239   54076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0410 22:38:27.293816   54076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0410 22:38:27.293907   54076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0410 22:38:27.299127   54076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0410 22:38:27.302437   54076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0410 22:38:28.047648   54076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:38:28.190351   54076 cache_images.go:92] duration metric: took 1.499760847s to LoadCachedImages
	W0410 22:38:28.190462   54076 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0410 22:38:28.190484   54076 kubeadm.go:928] updating node { 192.168.61.178 8443 v1.20.0 crio true true} ...
	I0410 22:38:28.190612   54076 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:38:28.190700   54076 ssh_runner.go:195] Run: crio config
	I0410 22:38:28.252638   54076 cni.go:84] Creating CNI manager for ""
	I0410 22:38:28.252662   54076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:38:28.252676   54076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:38:28.252693   54076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862528 NodeName:old-k8s-version-862528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0410 22:38:28.252820   54076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862528"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:38:28.252877   54076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0410 22:38:28.263896   54076 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:38:28.263972   54076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:38:28.278532   54076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0410 22:38:28.303419   54076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:38:28.323926   54076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0410 22:38:28.345430   54076 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0410 22:38:28.350087   54076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:38:28.364298   54076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:38:28.490127   54076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:38:28.508394   54076 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528 for IP: 192.168.61.178
	I0410 22:38:28.508435   54076 certs.go:194] generating shared ca certs ...
	I0410 22:38:28.508456   54076 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:38:28.508624   54076 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:38:28.508680   54076 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:38:28.508693   54076 certs.go:256] generating profile certs ...
	I0410 22:38:28.508754   54076 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.key
	I0410 22:38:28.508772   54076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt with IP's: []
	I0410 22:38:28.603375   54076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt ...
	I0410 22:38:28.603403   54076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: {Name:mk6ad76bb551be0d771b2f703efa106dc20ea744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:38:28.603581   54076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.key ...
	I0410 22:38:28.603599   54076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.key: {Name:mk08bec3b5e90ea533a9b55b1f1716f490fd4dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:38:28.603683   54076 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c
	I0410 22:38:28.603698   54076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt.a46c310c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.178]
	I0410 22:38:29.027490   54076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt.a46c310c ...
	I0410 22:38:29.027528   54076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt.a46c310c: {Name:mk227923e4a364963acde43e48095c4dcb73a3f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:38:29.027687   54076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c ...
	I0410 22:38:29.027704   54076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c: {Name:mk700582629292aecd3bee7cd4582834a9a976d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:38:29.027771   54076 certs.go:381] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt.a46c310c -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt
	I0410 22:38:29.027870   54076 certs.go:385] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key
	I0410 22:38:29.027930   54076 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key
	I0410 22:38:29.027946   54076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt with IP's: []
	I0410 22:38:29.194777   54076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt ...
	I0410 22:38:29.194808   54076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt: {Name:mka09de188b61993237834cb2041e559b62178a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:38:29.194973   54076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key ...
	I0410 22:38:29.194986   54076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key: {Name:mk7349224657258501fd06afee89bf2af50d5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:38:29.195144   54076 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:38:29.195182   54076 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:38:29.195193   54076 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:38:29.195216   54076 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:38:29.195239   54076 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:38:29.195263   54076 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:38:29.195297   54076 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:38:29.195883   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:38:29.229299   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:38:29.261510   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:38:29.294222   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:38:29.335035   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0410 22:38:29.436746   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:38:29.467822   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:38:29.495555   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:38:29.522240   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:38:29.550133   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:38:29.578079   54076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:38:29.608278   54076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:38:29.631635   54076 ssh_runner.go:195] Run: openssl version
	I0410 22:38:29.637864   54076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:38:29.651604   54076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:38:29.656501   54076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:38:29.656553   54076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:38:29.663045   54076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:38:29.675885   54076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:38:29.688249   54076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:38:29.693520   54076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:38:29.693597   54076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:38:29.699801   54076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:38:29.711770   54076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:38:29.723255   54076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:38:29.728068   54076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:38:29.728130   54076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:38:29.734410   54076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:38:29.746623   54076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:38:29.750868   54076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0410 22:38:29.750929   54076 kubeadm.go:391] StartCluster: {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:38:29.751023   54076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:38:29.751088   54076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:38:29.797461   54076 cri.go:89] found id: ""
	I0410 22:38:29.797552   54076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0410 22:38:29.808851   54076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:38:29.819381   54076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:38:29.830303   54076 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:38:29.830326   54076 kubeadm.go:156] found existing configuration files:
	
	I0410 22:38:29.830380   54076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:38:29.840439   54076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:38:29.840491   54076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:38:29.851554   54076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:38:29.862184   54076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:38:29.862274   54076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:38:29.873033   54076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:38:29.883226   54076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:38:29.883298   54076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:38:29.893918   54076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:38:29.904038   54076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:38:29.904108   54076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:38:29.914512   54076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:38:30.035859   54076 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:38:30.035922   54076 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:38:30.230966   54076 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:38:30.231145   54076 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:38:30.231277   54076 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:38:30.483348   54076 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:38:30.528024   54076 out.go:204]   - Generating certificates and keys ...
	I0410 22:38:30.528167   54076 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:38:30.528304   54076 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:38:30.761404   54076 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0410 22:38:30.887439   54076 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0410 22:38:31.096114   54076 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0410 22:38:31.251123   54076 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0410 22:38:31.365518   54076 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0410 22:38:31.365739   54076 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-862528] and IPs [192.168.61.178 127.0.0.1 ::1]
	I0410 22:38:31.517073   54076 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0410 22:38:31.517297   54076 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-862528] and IPs [192.168.61.178 127.0.0.1 ::1]
	I0410 22:38:31.730663   54076 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0410 22:38:31.857826   54076 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0410 22:38:32.083631   54076 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0410 22:38:32.083817   54076 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:38:32.252665   54076 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:38:32.489627   54076 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:38:32.687486   54076 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:38:33.031038   54076 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:38:33.054184   54076 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:38:33.055167   54076 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:38:33.055244   54076 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:38:33.213337   54076 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:38:33.215612   54076 out.go:204]   - Booting up control plane ...
	I0410 22:38:33.215795   54076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:38:33.217114   54076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:38:33.219624   54076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:38:33.220709   54076 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:38:33.226630   54076 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:39:13.224347   54076 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:39:13.224911   54076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:39:13.225306   54076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:39:18.226004   54076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:39:18.226216   54076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:39:28.226909   54076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:39:28.227132   54076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:39:48.228185   54076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:39:48.228465   54076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:40:28.228301   54076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:40:28.228603   54076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:40:28.228618   54076 kubeadm.go:309] 
	I0410 22:40:28.228660   54076 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:40:28.228711   54076 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:40:28.228722   54076 kubeadm.go:309] 
	I0410 22:40:28.228773   54076 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:40:28.228821   54076 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:40:28.228926   54076 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:40:28.228935   54076 kubeadm.go:309] 
	I0410 22:40:28.229059   54076 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:40:28.229114   54076 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:40:28.229162   54076 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:40:28.229173   54076 kubeadm.go:309] 
	I0410 22:40:28.229321   54076 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:40:28.229451   54076 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:40:28.229481   54076 kubeadm.go:309] 
	I0410 22:40:28.229653   54076 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:40:28.229796   54076 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:40:28.229897   54076 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:40:28.229998   54076 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:40:28.230014   54076 kubeadm.go:309] 
	I0410 22:40:28.231380   54076 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:40:28.231501   54076 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:40:28.231585   54076 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0410 22:40:28.231759   54076 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-862528] and IPs [192.168.61.178 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-862528] and IPs [192.168.61.178 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0410 22:40:28.231855   54076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:40:28.753745   54076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:40:28.769046   54076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:40:28.781221   54076 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:40:28.781249   54076 kubeadm.go:156] found existing configuration files:
	
	I0410 22:40:28.781302   54076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:40:28.791378   54076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:40:28.791451   54076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:40:28.801651   54076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:40:28.812353   54076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:40:28.812448   54076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:40:28.825269   54076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:40:28.835266   54076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:40:28.835333   54076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:40:28.845624   54076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:40:28.855484   54076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:40:28.855554   54076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:40:28.869399   54076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:40:28.947168   54076 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:40:28.947384   54076 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:40:29.094279   54076 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:40:29.094482   54076 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:40:29.094622   54076 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:40:29.333525   54076 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:40:29.335718   54076 out.go:204]   - Generating certificates and keys ...
	I0410 22:40:29.335822   54076 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:40:29.335953   54076 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:40:29.336085   54076 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:40:29.336183   54076 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:40:29.336311   54076 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:40:29.336421   54076 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:40:29.336534   54076 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:40:29.336651   54076 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:40:29.336769   54076 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:40:29.336890   54076 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:40:29.336956   54076 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:40:29.337054   54076 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:40:29.619559   54076 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:40:29.856892   54076 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:40:29.938429   54076 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:40:30.091196   54076 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:40:30.111488   54076 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:40:30.114592   54076 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:40:30.114827   54076 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:40:30.300659   54076 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:40:30.302189   54076 out.go:204]   - Booting up control plane ...
	I0410 22:40:30.302314   54076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:40:30.315525   54076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:40:30.317073   54076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:40:30.318175   54076 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:40:30.322031   54076 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:41:10.328981   54076 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:41:10.329439   54076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:41:10.329712   54076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:41:15.330617   54076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:41:15.330822   54076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:41:25.331339   54076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:41:25.331598   54076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:41:45.332473   54076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:41:45.332689   54076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:42:25.332783   54076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:42:25.333027   54076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:42:25.333045   54076 kubeadm.go:309] 
	I0410 22:42:25.333097   54076 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:42:25.333152   54076 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:42:25.333164   54076 kubeadm.go:309] 
	I0410 22:42:25.333209   54076 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:42:25.333256   54076 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:42:25.333393   54076 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:42:25.333406   54076 kubeadm.go:309] 
	I0410 22:42:25.333539   54076 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:42:25.333589   54076 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:42:25.333634   54076 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:42:25.333645   54076 kubeadm.go:309] 
	I0410 22:42:25.333770   54076 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:42:25.333877   54076 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:42:25.333916   54076 kubeadm.go:309] 
	I0410 22:42:25.334061   54076 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:42:25.334167   54076 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:42:25.334267   54076 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:42:25.334363   54076 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:42:25.334376   54076 kubeadm.go:309] 
	I0410 22:42:25.336845   54076 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:42:25.336962   54076 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:42:25.337051   54076 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0410 22:42:25.337121   54076 kubeadm.go:393] duration metric: took 3m55.586206572s to StartCluster
	I0410 22:42:25.337169   54076 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:42:25.337227   54076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:42:25.395128   54076 cri.go:89] found id: ""
	I0410 22:42:25.395158   54076 logs.go:276] 0 containers: []
	W0410 22:42:25.395169   54076 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:42:25.395176   54076 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:42:25.395233   54076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:42:25.442326   54076 cri.go:89] found id: ""
	I0410 22:42:25.442358   54076 logs.go:276] 0 containers: []
	W0410 22:42:25.442367   54076 logs.go:278] No container was found matching "etcd"
	I0410 22:42:25.442374   54076 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:42:25.442439   54076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:42:25.494886   54076 cri.go:89] found id: ""
	I0410 22:42:25.494916   54076 logs.go:276] 0 containers: []
	W0410 22:42:25.494924   54076 logs.go:278] No container was found matching "coredns"
	I0410 22:42:25.494930   54076 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:42:25.494971   54076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:42:25.546191   54076 cri.go:89] found id: ""
	I0410 22:42:25.546217   54076 logs.go:276] 0 containers: []
	W0410 22:42:25.546226   54076 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:42:25.546233   54076 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:42:25.546299   54076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:42:25.597586   54076 cri.go:89] found id: ""
	I0410 22:42:25.597612   54076 logs.go:276] 0 containers: []
	W0410 22:42:25.597622   54076 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:42:25.597629   54076 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:42:25.597691   54076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:42:25.638970   54076 cri.go:89] found id: ""
	I0410 22:42:25.638999   54076 logs.go:276] 0 containers: []
	W0410 22:42:25.639010   54076 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:42:25.639019   54076 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:42:25.639083   54076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:42:25.681477   54076 cri.go:89] found id: ""
	I0410 22:42:25.681508   54076 logs.go:276] 0 containers: []
	W0410 22:42:25.681518   54076 logs.go:278] No container was found matching "kindnet"
	I0410 22:42:25.681530   54076 logs.go:123] Gathering logs for kubelet ...
	I0410 22:42:25.681546   54076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:42:25.746030   54076 logs.go:123] Gathering logs for dmesg ...
	I0410 22:42:25.746065   54076 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:42:25.766216   54076 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:42:25.766245   54076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:42:25.896111   54076 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:42:25.896131   54076 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:42:25.896146   54076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:42:26.010193   54076 logs.go:123] Gathering logs for container status ...
	I0410 22:42:26.010227   54076 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0410 22:42:26.056242   54076 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0410 22:42:26.056297   54076 out.go:239] * 
	* 
	W0410 22:42:26.056358   54076 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:42:26.056388   54076 out.go:239] * 
	* 
	W0410 22:42:26.057381   54076 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:42:26.060736   54076 out.go:177] 
	W0410 22:42:26.062296   54076 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:42:26.062352   54076 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0410 22:42:26.062380   54076 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0410 22:42:26.064057   54076 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-862528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
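For reference, the troubleshooting steps suggested repeatedly in the kubeadm output above amount to the following shell sketch (run against the VM for this profile; CONTAINERID is the log's own placeholder for whatever ID the ps listing returns):

	# open a shell on the minikube VM for the failing profile
	minikube ssh -p old-k8s-version-862528
	# check whether the kubelet service is running and inspect its journal
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any Kubernetes containers CRI-O started, then inspect a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID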
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 6 (242.843164ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:42:26.339347   56613 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-862528" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (271.29s)
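One way to act on the suggestion logged above, sketched with the same profile and start flags this test used (whether the kubelet.cgroup-driver override actually resolves the kubelet failure is not verified here; a locally installed minikube binary could stand in for the test harness path out/minikube-linux-amd64):

	# inspect the kubelet journal first, as the suggestion recommends
	minikube ssh -p old-k8s-version-862528 -- sudo journalctl -xeu kubelet
	# retry the start with the suggested kubelet cgroup-driver override appended
	out/minikube-linux-amd64 start -p old-k8s-version-862528 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd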

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-646133 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-646133 --alsologtostderr -v=3: exit status 82 (2m0.592357417s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-646133"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 22:40:38.362368   55519 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:40:38.362649   55519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:40:38.362661   55519 out.go:304] Setting ErrFile to fd 2...
	I0410 22:40:38.362667   55519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:40:38.363005   55519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:40:38.363354   55519 out.go:298] Setting JSON to false
	I0410 22:40:38.363457   55519 mustload.go:65] Loading cluster: no-preload-646133
	I0410 22:40:38.363823   55519 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:40:38.363894   55519 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/config.json ...
	I0410 22:40:38.364066   55519 mustload.go:65] Loading cluster: no-preload-646133
	I0410 22:40:38.364170   55519 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:40:38.364194   55519 stop.go:39] StopHost: no-preload-646133
	I0410 22:40:38.364689   55519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:40:38.364743   55519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:40:38.379799   55519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41509
	I0410 22:40:38.380240   55519 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:40:38.380968   55519 main.go:141] libmachine: Using API Version  1
	I0410 22:40:38.381004   55519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:40:38.381428   55519 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:40:38.383982   55519 out.go:177] * Stopping node "no-preload-646133"  ...
	I0410 22:40:38.385913   55519 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0410 22:40:38.385960   55519 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:40:38.386211   55519 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0410 22:40:38.386244   55519 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:40:38.389474   55519 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:40:38.389970   55519 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:38:36 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:40:38.390001   55519 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:40:38.390199   55519 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:40:38.390422   55519 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:40:38.390601   55519 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:40:38.390773   55519 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:40:38.511612   55519 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0410 22:40:38.577929   55519 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0410 22:40:38.658969   55519 main.go:141] libmachine: Stopping "no-preload-646133"...
	I0410 22:40:38.659003   55519 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:40:38.660774   55519 main.go:141] libmachine: (no-preload-646133) Calling .Stop
	I0410 22:40:38.664905   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 0/120
	I0410 22:40:39.667313   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 1/120
	I0410 22:40:40.668929   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 2/120
	I0410 22:40:41.671284   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 3/120
	I0410 22:40:42.672651   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 4/120
	I0410 22:40:43.674715   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 5/120
	I0410 22:40:44.676449   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 6/120
	I0410 22:40:45.677953   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 7/120
	I0410 22:40:46.679520   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 8/120
	I0410 22:40:47.680964   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 9/120
	I0410 22:40:48.683277   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 10/120
	I0410 22:40:49.684792   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 11/120
	I0410 22:40:50.687125   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 12/120
	I0410 22:40:51.688824   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 13/120
	I0410 22:40:52.691252   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 14/120
	I0410 22:40:53.693019   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 15/120
	I0410 22:40:54.694589   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 16/120
	I0410 22:40:55.696284   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 17/120
	I0410 22:40:56.697667   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 18/120
	I0410 22:40:57.699219   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 19/120
	I0410 22:40:58.701348   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 20/120
	I0410 22:40:59.703472   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 21/120
	I0410 22:41:00.705927   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 22/120
	I0410 22:41:01.707545   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 23/120
	I0410 22:41:02.709547   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 24/120
	I0410 22:41:03.711544   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 25/120
	I0410 22:41:04.713256   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 26/120
	I0410 22:41:05.715104   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 27/120
	I0410 22:41:06.716759   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 28/120
	I0410 22:41:07.719375   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 29/120
	I0410 22:41:08.721990   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 30/120
	I0410 22:41:09.723559   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 31/120
	I0410 22:41:10.724736   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 32/120
	I0410 22:41:11.727064   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 33/120
	I0410 22:41:12.728633   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 34/120
	I0410 22:41:13.731114   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 35/120
	I0410 22:41:14.732794   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 36/120
	I0410 22:41:15.735511   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 37/120
	I0410 22:41:16.737055   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 38/120
	I0410 22:41:17.739524   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 39/120
	I0410 22:41:18.742037   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 40/120
	I0410 22:41:19.743727   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 41/120
	I0410 22:41:20.745220   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 42/120
	I0410 22:41:21.747516   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 43/120
	I0410 22:41:22.749062   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 44/120
	I0410 22:41:23.751311   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 45/120
	I0410 22:41:24.752912   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 46/120
	I0410 22:41:25.754686   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 47/120
	I0410 22:41:26.756456   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 48/120
	I0410 22:41:27.758192   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 49/120
	I0410 22:41:28.760487   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 50/120
	I0410 22:41:29.762141   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 51/120
	I0410 22:41:30.763729   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 52/120
	I0410 22:41:31.765503   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 53/120
	I0410 22:41:32.767000   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 54/120
	I0410 22:41:33.769095   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 55/120
	I0410 22:41:34.770855   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 56/120
	I0410 22:41:35.772261   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 57/120
	I0410 22:41:36.773766   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 58/120
	I0410 22:41:37.775208   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 59/120
	I0410 22:41:38.777164   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 60/120
	I0410 22:41:39.778946   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 61/120
	I0410 22:41:40.780306   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 62/120
	I0410 22:41:41.782955   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 63/120
	I0410 22:41:42.784425   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 64/120
	I0410 22:41:43.786756   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 65/120
	I0410 22:41:44.789220   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 66/120
	I0410 22:41:45.791157   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 67/120
	I0410 22:41:46.792851   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 68/120
	I0410 22:41:47.795463   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 69/120
	I0410 22:41:48.797984   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 70/120
	I0410 22:41:49.799495   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 71/120
	I0410 22:41:50.800935   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 72/120
	I0410 22:41:51.803149   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 73/120
	I0410 22:41:52.804783   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 74/120
	I0410 22:41:53.807025   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 75/120
	I0410 22:41:54.808507   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 76/120
	I0410 22:41:55.810373   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 77/120
	I0410 22:41:56.811834   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 78/120
	I0410 22:41:57.813413   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 79/120
	I0410 22:41:58.815968   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 80/120
	I0410 22:41:59.817660   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 81/120
	I0410 22:42:00.819171   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 82/120
	I0410 22:42:01.820417   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 83/120
	I0410 22:42:02.821608   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 84/120
	I0410 22:42:03.823621   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 85/120
	I0410 22:42:04.825294   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 86/120
	I0410 22:42:05.827101   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 87/120
	I0410 22:42:06.828940   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 88/120
	I0410 22:42:07.830522   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 89/120
	I0410 22:42:08.832794   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 90/120
	I0410 22:42:09.835198   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 91/120
	I0410 22:42:10.837149   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 92/120
	I0410 22:42:11.838570   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 93/120
	I0410 22:42:12.840000   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 94/120
	I0410 22:42:13.842066   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 95/120
	I0410 22:42:14.843384   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 96/120
	I0410 22:42:15.844914   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 97/120
	I0410 22:42:16.846860   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 98/120
	I0410 22:42:17.849172   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 99/120
	I0410 22:42:18.850792   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 100/120
	I0410 22:42:19.852459   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 101/120
	I0410 22:42:20.853943   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 102/120
	I0410 22:42:21.855802   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 103/120
	I0410 22:42:22.857291   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 104/120
	I0410 22:42:23.858932   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 105/120
	I0410 22:42:24.861417   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 106/120
	I0410 22:42:25.863172   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 107/120
	I0410 22:42:26.864841   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 108/120
	I0410 22:42:27.866363   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 109/120
	I0410 22:42:28.868590   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 110/120
	I0410 22:42:29.870495   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 111/120
	I0410 22:42:30.871933   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 112/120
	I0410 22:42:31.873648   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 113/120
	I0410 22:42:32.875006   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 114/120
	I0410 22:42:33.876897   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 115/120
	I0410 22:42:34.879101   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 116/120
	I0410 22:42:35.880838   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 117/120
	I0410 22:42:36.883071   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 118/120
	I0410 22:42:37.884486   55519 main.go:141] libmachine: (no-preload-646133) Waiting for machine to stop 119/120
	I0410 22:42:38.884928   55519 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0410 22:42:38.884990   55519 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0410 22:42:38.887266   55519 out.go:177] 
	W0410 22:42:38.888880   55519 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0410 22:42:38.888903   55519 out.go:239] * 
	* 
	W0410 22:42:38.891479   55519 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:42:38.893240   55519 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-646133 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646133 -n no-preload-646133
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646133 -n no-preload-646133: exit status 3 (18.460245361s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:42:57.356669   56795 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.17:22: connect: no route to host
	E0410 22:42:57.356698   56795 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.17:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-646133" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.05s)
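
The stop failures in this group all follow the same shape: the kvm2 driver issues a Stop call and then polls the guest state once per second for 120 attempts (the "Waiting for machine to stop N/120" lines above) before giving up and surfacing GUEST_STOP_TIMEOUT. The sketch below is a minimal, hypothetical version of such a bounded polling loop; the 120-attempt / 1-second budget and the final error string are taken from the log, while the isRunning probe and everything else are illustrative and not minikube's actual implementation.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForStop polls the guest state once per second, up to maxAttempts
    // times, mirroring the "Waiting for machine to stop N/120" lines above.
    // isRunning is a hypothetical probe; minikube's real code asks the
    // libmachine driver instead.
    func waitForStop(isRunning func() (bool, error), maxAttempts int) error {
        for i := 0; i < maxAttempts; i++ {
            running, err := isRunning()
            if err != nil {
                return err
            }
            if !running {
                return nil // guest reached the stopped state
            }
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
            time.Sleep(1 * time.Second)
        }
        // Once the budget is exhausted the caller reports GUEST_STOP_TIMEOUT,
        // as seen in the failure above.
        return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
        // A probe that never reports "stopped" reproduces the timeout path
        // quickly (3 attempts here instead of 120).
        alwaysRunning := func() (bool, error) { return true, nil }
        if err := waitForStop(alwaysRunning, 3); err != nil {
            fmt.Println("stop err:", err)
        }
    }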

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-706500 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-706500 --alsologtostderr -v=3: exit status 82 (2m0.598099662s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-706500"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 22:42:25.490432   56595 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:42:25.490649   56595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:42:25.490663   56595 out.go:304] Setting ErrFile to fd 2...
	I0410 22:42:25.490670   56595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:42:25.490956   56595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:42:25.491255   56595 out.go:298] Setting JSON to false
	I0410 22:42:25.491331   56595 mustload.go:65] Loading cluster: embed-certs-706500
	I0410 22:42:25.491745   56595 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:42:25.491811   56595 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/config.json ...
	I0410 22:42:25.491988   56595 mustload.go:65] Loading cluster: embed-certs-706500
	I0410 22:42:25.492089   56595 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:42:25.492113   56595 stop.go:39] StopHost: embed-certs-706500
	I0410 22:42:25.492518   56595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:42:25.492567   56595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:42:25.508075   56595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34497
	I0410 22:42:25.508646   56595 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:42:25.509230   56595 main.go:141] libmachine: Using API Version  1
	I0410 22:42:25.509267   56595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:42:25.509780   56595 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:42:25.513065   56595 out.go:177] * Stopping node "embed-certs-706500"  ...
	I0410 22:42:25.514432   56595 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0410 22:42:25.514463   56595 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:42:25.514757   56595 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0410 22:42:25.514785   56595 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:42:25.517825   56595 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:42:25.518285   56595 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:41:29 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:42:25.518317   56595 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:42:25.518568   56595 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:42:25.518765   56595 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:42:25.518928   56595 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:42:25.519202   56595 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:42:25.632512   56595 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0410 22:42:25.696846   56595 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0410 22:42:25.788115   56595 main.go:141] libmachine: Stopping "embed-certs-706500"...
	I0410 22:42:25.788146   56595 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:42:25.790273   56595 main.go:141] libmachine: (embed-certs-706500) Calling .Stop
	I0410 22:42:25.795165   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 0/120
	I0410 22:42:26.796789   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 1/120
	I0410 22:42:27.798480   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 2/120
	I0410 22:42:28.799968   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 3/120
	I0410 22:42:29.801356   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 4/120
	I0410 22:42:30.803542   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 5/120
	I0410 22:42:31.805326   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 6/120
	I0410 22:42:32.807249   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 7/120
	I0410 22:42:33.809005   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 8/120
	I0410 22:42:34.811220   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 9/120
	I0410 22:42:35.812925   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 10/120
	I0410 22:42:36.814372   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 11/120
	I0410 22:42:37.816171   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 12/120
	I0410 22:42:38.817958   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 13/120
	I0410 22:42:39.819364   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 14/120
	I0410 22:42:40.820847   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 15/120
	I0410 22:42:41.822361   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 16/120
	I0410 22:42:42.823928   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 17/120
	I0410 22:42:43.825614   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 18/120
	I0410 22:42:44.827061   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 19/120
	I0410 22:42:45.829372   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 20/120
	I0410 22:42:46.831210   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 21/120
	I0410 22:42:47.832739   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 22/120
	I0410 22:42:48.834254   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 23/120
	I0410 22:42:49.836039   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 24/120
	I0410 22:42:50.837371   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 25/120
	I0410 22:42:51.839227   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 26/120
	I0410 22:42:52.840886   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 27/120
	I0410 22:42:53.842546   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 28/120
	I0410 22:42:54.844332   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 29/120
	I0410 22:42:55.846926   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 30/120
	I0410 22:42:56.848441   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 31/120
	I0410 22:42:57.849935   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 32/120
	I0410 22:42:58.851503   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 33/120
	I0410 22:42:59.853341   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 34/120
	I0410 22:43:00.855688   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 35/120
	I0410 22:43:01.857812   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 36/120
	I0410 22:43:02.859417   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 37/120
	I0410 22:43:03.860888   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 38/120
	I0410 22:43:04.862457   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 39/120
	I0410 22:43:05.864867   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 40/120
	I0410 22:43:06.866688   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 41/120
	I0410 22:43:07.868713   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 42/120
	I0410 22:43:08.871224   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 43/120
	I0410 22:43:09.873108   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 44/120
	I0410 22:43:10.875191   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 45/120
	I0410 22:43:11.877107   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 46/120
	I0410 22:43:12.878642   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 47/120
	I0410 22:43:13.880307   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 48/120
	I0410 22:43:14.881789   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 49/120
	I0410 22:43:15.883295   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 50/120
	I0410 22:43:16.884654   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 51/120
	I0410 22:43:17.886247   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 52/120
	I0410 22:43:18.887887   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 53/120
	I0410 22:43:19.889601   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 54/120
	I0410 22:43:20.891869   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 55/120
	I0410 22:43:21.893433   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 56/120
	I0410 22:43:22.895056   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 57/120
	I0410 22:43:23.896448   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 58/120
	I0410 22:43:24.898275   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 59/120
	I0410 22:43:25.900852   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 60/120
	I0410 22:43:26.903160   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 61/120
	I0410 22:43:27.904466   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 62/120
	I0410 22:43:28.906037   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 63/120
	I0410 22:43:29.907421   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 64/120
	I0410 22:43:30.909181   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 65/120
	I0410 22:43:31.911549   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 66/120
	I0410 22:43:32.913191   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 67/120
	I0410 22:43:33.915255   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 68/120
	I0410 22:43:34.917097   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 69/120
	I0410 22:43:35.918820   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 70/120
	I0410 22:43:36.920775   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 71/120
	I0410 22:43:37.922219   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 72/120
	I0410 22:43:38.924357   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 73/120
	I0410 22:43:39.926091   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 74/120
	I0410 22:43:40.928540   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 75/120
	I0410 22:43:41.931224   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 76/120
	I0410 22:43:42.933152   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 77/120
	I0410 22:43:43.934583   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 78/120
	I0410 22:43:44.936145   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 79/120
	I0410 22:43:45.938723   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 80/120
	I0410 22:43:46.941005   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 81/120
	I0410 22:43:47.942713   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 82/120
	I0410 22:43:48.944891   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 83/120
	I0410 22:43:49.947174   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 84/120
	I0410 22:43:50.949010   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 85/120
	I0410 22:43:51.951220   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 86/120
	I0410 22:43:52.953025   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 87/120
	I0410 22:43:53.954582   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 88/120
	I0410 22:43:54.956082   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 89/120
	I0410 22:43:55.958827   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 90/120
	I0410 22:43:56.960359   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 91/120
	I0410 22:43:57.962084   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 92/120
	I0410 22:43:58.963470   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 93/120
	I0410 22:43:59.965099   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 94/120
	I0410 22:44:00.967176   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 95/120
	I0410 22:44:01.968689   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 96/120
	I0410 22:44:02.970075   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 97/120
	I0410 22:44:03.971315   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 98/120
	I0410 22:44:04.973025   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 99/120
	I0410 22:44:05.974959   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 100/120
	I0410 22:44:06.976262   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 101/120
	I0410 22:44:07.977674   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 102/120
	I0410 22:44:08.978838   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 103/120
	I0410 22:44:09.980636   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 104/120
	I0410 22:44:10.982605   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 105/120
	I0410 22:44:11.984026   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 106/120
	I0410 22:44:12.985889   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 107/120
	I0410 22:44:13.987366   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 108/120
	I0410 22:44:14.988608   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 109/120
	I0410 22:44:15.990843   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 110/120
	I0410 22:44:16.992539   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 111/120
	I0410 22:44:17.994989   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 112/120
	I0410 22:44:18.996663   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 113/120
	I0410 22:44:19.999048   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 114/120
	I0410 22:44:21.001152   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 115/120
	I0410 22:44:22.003028   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 116/120
	I0410 22:44:23.004570   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 117/120
	I0410 22:44:24.006126   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 118/120
	I0410 22:44:25.007599   56595 main.go:141] libmachine: (embed-certs-706500) Waiting for machine to stop 119/120
	I0410 22:44:26.008665   56595 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0410 22:44:26.008726   56595 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0410 22:44:26.010887   56595 out.go:177] 
	W0410 22:44:26.012592   56595 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0410 22:44:26.012613   56595 out.go:239] * 
	* 
	W0410 22:44:26.015176   56595 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:44:26.016635   56595 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-706500 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-706500 -n embed-certs-706500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-706500 -n embed-certs-706500: exit status 3 (18.602143476s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:44:44.620774   57954 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host
	E0410 22:44:44.620795   57954 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-706500" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.20s)
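
After the failed stop, every post-mortem status call errors out with "dial tcp 192.168.39.10:22: connect: no route to host" (and 192.168.50.17:22 for the no-preload profile): libvirt still reports the guest as Running, but it is no longer reachable over SSH. That reachability check can be reproduced in isolation with a timed TCP dial; the address below is the embed-certs guest IP from this run and the helper is purely illustrative.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // reachable reports whether a TCP connection to addr can be opened within
    // the timeout. "no route to host", as in the status errors above, surfaces
    // here as a dial error.
    func reachable(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return err
        }
        return conn.Close()
    }

    func main() {
        // 192.168.39.10:22 is the embed-certs-706500 SSH endpoint from this run.
        if err := reachable("192.168.39.10:22", 3*time.Second); err != nil {
            fmt.Println("ssh endpoint unreachable:", err)
            return
        }
        fmt.Println("ssh endpoint reachable")
    }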

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-862528 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-862528 create -f testdata/busybox.yaml: exit status 1 (45.097997ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-862528" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-862528 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 6 (230.106329ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:42:26.615792   56652 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-862528" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 6 (238.062109ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:42:26.854070   56681 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-862528" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)
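
DeployApp never gets to apply testdata/busybox.yaml: kubectl rejects the run because the context "old-k8s-version-862528" is not defined, and the follow-up status calls confirm the profile no longer appears in /home/jenkins/minikube-integration/18610-5679/kubeconfig. A guard like the sketch below makes that precondition explicit before shelling out to kubectl; it uses client-go's clientcmd loader, the path and context name are the ones from this run, and it is an illustration rather than part of the test suite.

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    // contextExists loads a kubeconfig file and reports whether the named
    // context is defined in it.
    func contextExists(kubeconfigPath, name string) (bool, error) {
        cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
        if err != nil {
            return false, err
        }
        _, ok := cfg.Contexts[name]
        return ok, nil
    }

    func main() {
        path := "/home/jenkins/minikube-integration/18610-5679/kubeconfig" // path from this run
        ok, err := contextExists(path, "old-k8s-version-862528")
        if err != nil {
            fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
            os.Exit(1)
        }
        if !ok {
            fmt.Println(`context "old-k8s-version-862528" does not exist`)
            os.Exit(1)
        }
        fmt.Println("context present; safe to run kubectl against it")
    }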

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (87.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-862528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-862528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m27.534064827s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-862528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-862528 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-862528 describe deploy/metrics-server -n kube-system: exit status 1 (46.687917ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-862528" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-862528 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 6 (236.076936ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:43:54.671704   57587 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-862528" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (87.82s)
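
EnableAddonWhileActive fails one layer deeper: the metrics-server manifests reach the guest, but every kubectl apply against the local apiserver is refused ("The connection to the server localhost:8443 was refused"), consistent with the control plane on old-k8s-version-862528 never becoming healthy. A minimal probe of the apiserver port, of the kind one might run from inside the guest to confirm this, is sketched below; the URL, the unauthenticated request, and the skipped certificate verification are assumptions for illustration only.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHealthz issues an unauthenticated GET against the apiserver's
    // /healthz endpoint. A connection-refused error here matches the
    // "connection to the server localhost:8443 was refused" failure above.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 3 * time.Second,
            Transport: &http.Transport{
                // Illustrative only: skip certificate verification for a local probe.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("GET %s -> %s (%s)\n", url, resp.Status, string(body))
        return nil
    }

    func main() {
        // 8443 is the apiserver port used by this profile; run from the guest.
        if err := probeHealthz("https://localhost:8443/healthz"); err != nil {
            fmt.Println("apiserver not reachable:", err)
        }
    }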

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646133 -n no-preload-646133
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646133 -n no-preload-646133: exit status 3 (3.200040596s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:43:00.556813   56882 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.17:22: connect: no route to host
	E0410 22:43:00.556836   56882 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.17:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-646133 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-646133 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151980396s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.17:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-646133 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646133 -n no-preload-646133
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646133 -n no-preload-646133: exit status 3 (3.063435738s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:43:09.772716   56996 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.17:22: connect: no route to host
	E0410 22:43:09.772735   56996 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.17:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-646133" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (764.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-862528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-862528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m40.767471905s)

                                                
                                                
-- stdout --
	* [old-k8s-version-862528] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18610
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-862528" primary control-plane node in "old-k8s-version-862528" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-862528" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 22:44:00.422136   57719 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:44:00.422325   57719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:44:00.422337   57719 out.go:304] Setting ErrFile to fd 2...
	I0410 22:44:00.422346   57719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:44:00.422905   57719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:44:00.423881   57719 out.go:298] Setting JSON to false
	I0410 22:44:00.424781   57719 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5183,"bootTime":1712783858,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:44:00.424844   57719 start.go:139] virtualization: kvm guest
	I0410 22:44:00.427057   57719 out.go:177] * [old-k8s-version-862528] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:44:00.428446   57719 notify.go:220] Checking for updates...
	I0410 22:44:00.428462   57719 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:44:00.430109   57719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:44:00.431451   57719 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:44:00.432729   57719 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:44:00.434098   57719 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:44:00.435419   57719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:44:00.437281   57719 config.go:182] Loaded profile config "old-k8s-version-862528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:44:00.437693   57719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:44:00.437742   57719 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:44:00.453392   57719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35215
	I0410 22:44:00.453830   57719 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:44:00.454432   57719 main.go:141] libmachine: Using API Version  1
	I0410 22:44:00.454460   57719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:44:00.454903   57719 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:44:00.455147   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:44:00.457085   57719 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0410 22:44:00.458374   57719 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:44:00.458813   57719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:44:00.458862   57719 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:44:00.473926   57719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41029
	I0410 22:44:00.474356   57719 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:44:00.474820   57719 main.go:141] libmachine: Using API Version  1
	I0410 22:44:00.474842   57719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:44:00.475157   57719 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:44:00.475392   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:44:00.512883   57719 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:44:00.514232   57719 start.go:297] selected driver: kvm2
	I0410 22:44:00.514248   57719 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:44:00.514392   57719 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:44:00.515055   57719 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:44:00.515116   57719 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:44:00.530287   57719 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:44:00.530789   57719 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:44:00.530880   57719 cni.go:84] Creating CNI manager for ""
	I0410 22:44:00.530903   57719 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:44:00.530952   57719 start.go:340] cluster config:
	{Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:44:00.531075   57719 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:44:00.532912   57719 out.go:177] * Starting "old-k8s-version-862528" primary control-plane node in "old-k8s-version-862528" cluster
	I0410 22:44:00.534343   57719 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:44:00.534401   57719 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0410 22:44:00.534418   57719 cache.go:56] Caching tarball of preloaded images
	I0410 22:44:00.534529   57719 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:44:00.534543   57719 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0410 22:44:00.534638   57719 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json ...
	I0410 22:44:00.534830   57719 start.go:360] acquireMachinesLock for old-k8s-version-862528: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:48:06.472768   57719 start.go:364] duration metric: took 4m5.937893783s to acquireMachinesLock for "old-k8s-version-862528"
	I0410 22:48:06.472833   57719 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:06.472852   57719 fix.go:54] fixHost starting: 
	I0410 22:48:06.473157   57719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:06.473186   57719 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:06.488728   57719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0410 22:48:06.489157   57719 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:06.489590   57719 main.go:141] libmachine: Using API Version  1
	I0410 22:48:06.489612   57719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:06.490011   57719 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:06.490171   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:06.490337   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetState
	I0410 22:48:06.491997   57719 fix.go:112] recreateIfNeeded on old-k8s-version-862528: state=Stopped err=<nil>
	I0410 22:48:06.492030   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	W0410 22:48:06.492234   57719 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:06.493891   57719 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862528" ...
	I0410 22:48:06.495233   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .Start
	I0410 22:48:06.495416   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring networks are active...
	I0410 22:48:06.496254   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network default is active
	I0410 22:48:06.496589   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network mk-old-k8s-version-862528 is active
	I0410 22:48:06.497002   57719 main.go:141] libmachine: (old-k8s-version-862528) Getting domain xml...
	I0410 22:48:06.497751   57719 main.go:141] libmachine: (old-k8s-version-862528) Creating domain...
	I0410 22:48:07.722703   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting to get IP...
	I0410 22:48:07.723942   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:07.724373   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:07.724451   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:07.724338   59021 retry.go:31] will retry after 284.455366ms: waiting for machine to come up
	I0410 22:48:08.011077   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.011598   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.011628   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.011545   59021 retry.go:31] will retry after 337.946102ms: waiting for machine to come up
	I0410 22:48:08.351219   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.351725   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.351744   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.351691   59021 retry.go:31] will retry after 454.774669ms: waiting for machine to come up
	I0410 22:48:08.808516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.808953   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.808991   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.808893   59021 retry.go:31] will retry after 484.667282ms: waiting for machine to come up
	I0410 22:48:09.295665   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.296127   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.296148   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.296083   59021 retry.go:31] will retry after 515.00238ms: waiting for machine to come up
	I0410 22:48:09.812855   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.813337   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.813362   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.813289   59021 retry.go:31] will retry after 596.67118ms: waiting for machine to come up
	I0410 22:48:10.411103   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:10.411616   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:10.411640   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:10.411568   59021 retry.go:31] will retry after 1.035822512s: waiting for machine to come up
	I0410 22:48:11.448894   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:11.449358   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:11.449388   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:11.449315   59021 retry.go:31] will retry after 1.258446774s: waiting for machine to come up
	I0410 22:48:12.709048   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:12.709587   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:12.709618   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:12.709530   59021 retry.go:31] will retry after 1.149380432s: waiting for machine to come up
	I0410 22:48:13.860550   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:13.861084   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:13.861110   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:13.861028   59021 retry.go:31] will retry after 1.733388735s: waiting for machine to come up
	I0410 22:48:15.595870   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:15.596447   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:15.596487   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:15.596343   59021 retry.go:31] will retry after 2.536794123s: waiting for machine to come up
	I0410 22:48:18.135592   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:18.136099   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:18.136128   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:18.136056   59021 retry.go:31] will retry after 3.390395523s: waiting for machine to come up
	I0410 22:48:21.528518   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:21.528964   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:21.529008   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:21.528906   59021 retry.go:31] will retry after 4.165145769s: waiting for machine to come up
	I0410 22:48:25.699595   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700129   57719 main.go:141] libmachine: (old-k8s-version-862528) Found IP for machine: 192.168.61.178
	I0410 22:48:25.700159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has current primary IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700166   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserving static IP address...
	I0410 22:48:25.700654   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserved static IP address: 192.168.61.178
	I0410 22:48:25.700676   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting for SSH to be available...
	I0410 22:48:25.700704   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.700732   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | skip adding static IP to network mk-old-k8s-version-862528 - found existing host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"}
	I0410 22:48:25.700745   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Getting to WaitForSSH function...
	I0410 22:48:25.702929   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703290   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.703322   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703490   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH client type: external
	I0410 22:48:25.703519   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa (-rw-------)
	I0410 22:48:25.703551   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:25.703590   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | About to run SSH command:
	I0410 22:48:25.703635   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | exit 0
	I0410 22:48:25.832738   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | SSH cmd err, output: <nil>: 
	I0410 22:48:25.833133   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetConfigRaw
	I0410 22:48:25.833784   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:25.836323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.836874   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.836908   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.837156   57719 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json ...
	I0410 22:48:25.837472   57719 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:25.837502   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:25.837710   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.840159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840488   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.840516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840593   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.840815   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.840992   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.841134   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.841337   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.841543   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.841556   57719 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:25.957153   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:25.957189   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957438   57719 buildroot.go:166] provisioning hostname "old-k8s-version-862528"
	I0410 22:48:25.957461   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.960779   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961149   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.961184   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961332   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.961546   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961689   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961864   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.962020   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.962196   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.962207   57719 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862528 && echo "old-k8s-version-862528" | sudo tee /etc/hostname
	I0410 22:48:26.087073   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862528
	
	I0410 22:48:26.087099   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.089770   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090109   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.090140   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090261   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.090446   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090623   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090760   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.090951   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.091131   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.091155   57719 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:26.214422   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
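The two SSH commands above first set the guest hostname via tee /etc/hostname, then keep the 127.0.1.1 entry in /etc/hosts pointing at the new name. A quick way to confirm both on the guest (a sketch, using only the name from the log):
  hostname                                    # old-k8s-version-862528
  grep old-k8s-version-862528 /etc/hosts      # expect a "127.0.1.1 old-k8s-version-862528" line, per the script above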
	I0410 22:48:26.214462   57719 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:26.214490   57719 buildroot.go:174] setting up certificates
	I0410 22:48:26.214498   57719 provision.go:84] configureAuth start
	I0410 22:48:26.214509   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:26.214793   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.217463   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217809   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.217850   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217975   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.219971   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220235   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.220265   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220480   57719 provision.go:143] copyHostCerts
	I0410 22:48:26.220526   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:26.220542   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:26.220604   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:26.220703   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:26.220712   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:26.220736   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:26.220789   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:26.220796   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:26.220817   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:26.220864   57719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862528 san=[127.0.0.1 192.168.61.178 localhost minikube old-k8s-version-862528]
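The server certificate generated above is written to machines/server.pem with the SANs listed in the san=[...] field. Assuming openssl is available wherever the file is inspected, those SANs can be read back with:
  openssl x509 -in /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
  # should list 127.0.0.1, 192.168.61.178, localhost, minikube and old-k8s-version-862528, matching the log entry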
	I0410 22:48:26.288372   57719 provision.go:177] copyRemoteCerts
	I0410 22:48:26.288445   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:26.288468   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.290980   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291298   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.291339   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291444   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.291635   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.291809   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.291927   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.379823   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:26.405285   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0410 22:48:26.430122   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:26.456124   57719 provision.go:87] duration metric: took 241.614364ms to configureAuth
	I0410 22:48:26.456154   57719 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:26.456356   57719 config.go:182] Loaded profile config "old-k8s-version-862528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:48:26.456480   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.459028   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459335   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.459366   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.459742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.459888   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.460037   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.460211   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.460379   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.460413   57719 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:26.732588   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:26.732614   57719 machine.go:97] duration metric: took 895.122467ms to provisionDockerMachine
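The tee/systemctl command a few lines up writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. A minimal check that the drop-in took effect on the guest (a sketch, paths as shown in the log):
  cat /etc/sysconfig/crio.minikube    # should contain the --insecure-registry 10.96.0.0/12 line echoed above
  sudo systemctl is-active crio       # "active" if the restart in the logged command succeeded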
	I0410 22:48:26.732627   57719 start.go:293] postStartSetup for "old-k8s-version-862528" (driver="kvm2")
	I0410 22:48:26.732641   57719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:26.732679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.733014   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:26.733044   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.735820   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736217   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.736244   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736418   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.736630   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.736840   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.737020   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.823452   57719 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:26.827806   57719 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:26.827827   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:26.827899   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:26.828009   57719 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:26.828122   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:26.837564   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:26.862278   57719 start.go:296] duration metric: took 129.638185ms for postStartSetup
	I0410 22:48:26.862325   57719 fix.go:56] duration metric: took 20.389482643s for fixHost
	I0410 22:48:26.862346   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.864911   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865277   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.865301   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865419   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.865597   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865872   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.866083   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.866283   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.866300   57719 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0410 22:48:26.977317   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789306.948982315
	
	I0410 22:48:26.977337   57719 fix.go:216] guest clock: 1712789306.948982315
	I0410 22:48:26.977344   57719 fix.go:229] Guest: 2024-04-10 22:48:26.948982315 +0000 UTC Remote: 2024-04-10 22:48:26.862329953 +0000 UTC m=+266.486936912 (delta=86.652362ms)
	I0410 22:48:26.977362   57719 fix.go:200] guest clock delta is within tolerance: 86.652362ms
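The delta above is simply the difference between the guest's date +%s.%N output and the host-side timestamp; both share the whole-second value 1712789306, so the fractional parts alone reproduce it (a sketch of the arithmetic, not minikube's own code):
  awk 'BEGIN { printf "%.6f ms\n", (0.948982315 - 0.862329953) * 1000 }'    # prints 86.652362 ms, matching the log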
	I0410 22:48:26.977366   57719 start.go:83] releasing machines lock for "old-k8s-version-862528", held for 20.504554043s
	I0410 22:48:26.977386   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.977653   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.980035   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980376   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.980419   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980602   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981224   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981421   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981516   57719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:26.981558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.981645   57719 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:26.981670   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.984375   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984568   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984840   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.984868   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984953   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985030   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.985079   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.985118   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985236   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985277   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985374   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985450   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.985516   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985635   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:27.105002   57719 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:27.111205   57719 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:27.261678   57719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:27.268336   57719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:27.268423   57719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:27.290099   57719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:27.290122   57719 start.go:494] detecting cgroup driver to use...
	I0410 22:48:27.290174   57719 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:27.308787   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:27.325557   57719 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:27.325611   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:27.340859   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:27.355570   57719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:27.479670   57719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:27.653364   57719 docker.go:233] disabling docker service ...
	I0410 22:48:27.653424   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:27.669775   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:27.683654   57719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:27.813212   57719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:27.929620   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
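Because the runtime here is CRI-O, the block above stops the cri-docker and docker units, disables their sockets, and masks the services. The resulting unit states can be confirmed with (a sketch):
  sudo systemctl is-enabled cri-docker.socket cri-docker.service docker.socket docker.service
  # per the commands above: the sockets report "disabled", the services "masked"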
	I0410 22:48:27.946085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:27.966341   57719 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0410 22:48:27.966404   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.978022   57719 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:27.978111   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.989324   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.001429   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
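The three sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" in the drop-in. Assuming the path from the log, the result can be spot-checked with:
  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
  # expected, given the commands above:
  #   pause_image = "registry.k8s.io/pause:3.2"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"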
	I0410 22:48:28.012965   57719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:28.024663   57719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:28.034362   57719 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:28.034423   57719 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:28.048740   57719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
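The failed sysctl probe above is expected on a freshly booted guest: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the modprobe follows it. A manual re-check of both kernel prerequisites (a sketch, same commands as the log):
  sudo modprobe br_netfilter
  sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward    # both keys should now resolve; ip_forward is forced to 1 above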
	I0410 22:48:28.060698   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:28.188526   57719 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:28.348442   57719 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:28.348523   57719 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:28.353501   57719 start.go:562] Will wait 60s for crictl version
	I0410 22:48:28.353566   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:28.357486   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:28.391138   57719 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:28.391221   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.421399   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.455851   57719 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0410 22:48:28.457534   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:28.460913   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461297   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:28.461323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461558   57719 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:28.466450   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:28.480549   57719 kubeadm.go:877] updating cluster {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:28.480671   57719 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:48:28.480775   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:28.536971   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:28.537034   57719 ssh_runner.go:195] Run: which lz4
	I0410 22:48:28.541757   57719 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0410 22:48:28.546381   57719 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:28.546413   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0410 22:48:30.411805   57719 crio.go:462] duration metric: took 1.870076139s to copy over tarball
	I0410 22:48:30.411900   57719 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:33.358026   57719 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946092727s)
	I0410 22:48:33.358059   57719 crio.go:469] duration metric: took 2.946222933s to extract the tarball
	I0410 22:48:33.358069   57719 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:33.402924   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:33.441006   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:33.441033   57719 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:48:33.441090   57719 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.441142   57719 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.441203   57719 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.441210   57719 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.441318   57719 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0410 22:48:33.441339   57719 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.441375   57719 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.441395   57719 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442645   57719 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.442667   57719 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.442706   57719 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.442717   57719 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0410 22:48:33.442796   57719 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.442807   57719 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442814   57719 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.442866   57719 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.651119   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.652634   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.665548   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.669396   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.672510   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.674137   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0410 22:48:33.686915   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.756592   57719 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0410 22:48:33.756639   57719 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.756696   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.756696   57719 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0410 22:48:33.756789   57719 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.756810   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867043   57719 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0410 22:48:33.867061   57719 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0410 22:48:33.867090   57719 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.867091   57719 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.867135   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867166   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867185   57719 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0410 22:48:33.867220   57719 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.867252   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867261   57719 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0410 22:48:33.867303   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.867311   57719 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0410 22:48:33.867355   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867359   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.867286   57719 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0410 22:48:33.867452   57719 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.867481   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.871719   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.881086   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.964827   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.964854   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0410 22:48:33.964932   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0410 22:48:33.964948   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.976084   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0410 22:48:33.976155   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0410 22:48:33.976205   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0410 22:48:34.011460   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0410 22:48:34.289751   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:34.429542   57719 cache_images.go:92] duration metric: took 988.487885ms to LoadCachedImages
	W0410 22:48:34.429636   57719 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0410 22:48:34.429665   57719 kubeadm.go:928] updating node { 192.168.61.178 8443 v1.20.0 crio true true} ...
	I0410 22:48:34.429782   57719 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:34.429870   57719 ssh_runner.go:195] Run: crio config
	I0410 22:48:34.478794   57719 cni.go:84] Creating CNI manager for ""
	I0410 22:48:34.478829   57719 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:34.478845   57719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:34.478868   57719 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862528 NodeName:old-k8s-version-862528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0410 22:48:34.479065   57719 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862528"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:34.479147   57719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0410 22:48:34.489950   57719 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:34.490007   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:34.500261   57719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0410 22:48:34.517530   57719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:34.534814   57719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0410 22:48:34.552669   57719 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:34.556612   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:34.569643   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:34.700791   57719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:34.719682   57719 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528 for IP: 192.168.61.178
	I0410 22:48:34.719703   57719 certs.go:194] generating shared ca certs ...
	I0410 22:48:34.719722   57719 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:34.719900   57719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:34.719951   57719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:34.719965   57719 certs.go:256] generating profile certs ...
	I0410 22:48:34.720091   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.key
	I0410 22:48:34.720155   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c
	I0410 22:48:34.720199   57719 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key
	I0410 22:48:34.720337   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:34.720376   57719 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:34.720386   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:34.720438   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:34.720472   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:34.720502   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:34.720557   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:34.721238   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:34.769810   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:34.805397   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:34.846743   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:34.888720   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0410 22:48:34.915958   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:48:34.962182   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:34.992444   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:35.023525   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:35.051098   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:35.077305   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:35.102172   57719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:35.121381   57719 ssh_runner.go:195] Run: openssl version
	I0410 22:48:35.127869   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:35.140056   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145172   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145242   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.152081   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:48:35.164621   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:35.176511   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182164   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182217   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.188968   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:35.201491   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:35.213468   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218519   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218586   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.224872   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:35.236964   57719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:35.242262   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:35.249245   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:35.256301   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:35.263359   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:35.270166   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:35.276953   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:48:35.283529   57719 kubeadm.go:391] StartCluster: {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:35.283643   57719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:35.283700   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.328461   57719 cri.go:89] found id: ""
	I0410 22:48:35.328532   57719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:35.340207   57719 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:35.340235   57719 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:35.340245   57719 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:35.340293   57719 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:35.351212   57719 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:35.352189   57719 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:48:35.352989   57719 kubeconfig.go:62] /home/jenkins/minikube-integration/18610-5679/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862528" cluster setting kubeconfig missing "old-k8s-version-862528" context setting]
	I0410 22:48:35.353956   57719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:35.428830   57719 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:35.479813   57719 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.178
	I0410 22:48:35.479853   57719 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:35.479882   57719 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:35.479940   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.520506   57719 cri.go:89] found id: ""
	I0410 22:48:35.520577   57719 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:35.538167   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:35.548571   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:35.548600   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:35.548662   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:35.558559   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:35.558612   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:35.568950   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:35.578644   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:35.578712   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:35.589075   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.600265   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:35.600321   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.611459   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:35.621712   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:35.621785   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:35.632133   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:35.643494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:35.775309   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.133286   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.35793645s)
	I0410 22:48:37.133334   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.368687   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.497136   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.584652   57719 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:37.584744   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.085293   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.585489   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.584951   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:40.085144   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:40.585356   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.084839   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.585434   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.085797   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.585578   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.085621   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.585581   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.584785   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:45.085394   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:45.584769   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.085396   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.585857   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.085186   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.585668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.085585   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.585617   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.085227   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.585626   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:50.084900   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:50.585691   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.085669   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.585308   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.085393   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.585619   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.085643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.585076   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.585027   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.085629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.585506   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.585876   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.085775   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.585260   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.585588   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.085661   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.585663   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:00.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:00.585234   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.084884   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.585066   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.085697   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.585573   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.085552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.585521   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.584802   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:05.085266   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:05.585408   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.085250   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.585503   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.085422   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.584909   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.084863   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.585859   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.085175   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.585660   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:10.085221   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:10.585333   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.585062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.085191   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.585644   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.085615   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.585355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.085270   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.584868   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:15.085639   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:15.585476   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.085404   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.585123   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.085713   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.584877   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.085601   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.585222   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.084891   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.585215   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:20.085668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:20.585629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.084898   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.585346   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.085672   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.585768   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.085613   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.585507   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.085104   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.585745   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:25.084858   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:25.585095   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.085119   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.585846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.084920   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.585251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.084926   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.585643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.084937   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.585666   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:30.085088   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:30.585515   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.085273   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.585347   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.585361   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.085648   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.585256   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.084938   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.585005   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:35.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:35.585228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.085699   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.585690   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.085760   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.584867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:37.584947   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:37.625964   57719 cri.go:89] found id: ""
	I0410 22:49:37.625989   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.625996   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:37.626001   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:37.626046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:37.669151   57719 cri.go:89] found id: ""
	I0410 22:49:37.669178   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.669188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:37.669194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:37.669242   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:37.711426   57719 cri.go:89] found id: ""
	I0410 22:49:37.711456   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.711466   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:37.711474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:37.711538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:37.754678   57719 cri.go:89] found id: ""
	I0410 22:49:37.754707   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.754719   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:37.754726   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:37.754809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:37.795259   57719 cri.go:89] found id: ""
	I0410 22:49:37.795291   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.795301   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:37.795307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:37.795375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:37.836961   57719 cri.go:89] found id: ""
	I0410 22:49:37.836994   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.837004   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:37.837011   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:37.837075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:37.876195   57719 cri.go:89] found id: ""
	I0410 22:49:37.876223   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.876233   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:37.876239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:37.876290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:37.911688   57719 cri.go:89] found id: ""
	I0410 22:49:37.911715   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.911725   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:37.911736   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:37.911751   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:37.954690   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:37.954734   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:38.006731   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:38.006771   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:38.024290   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:38.024314   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:38.148504   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:38.148529   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:38.148561   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:40.726314   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:40.743098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:40.743168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:40.794673   57719 cri.go:89] found id: ""
	I0410 22:49:40.794697   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.794704   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:40.794710   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:40.794756   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:40.836274   57719 cri.go:89] found id: ""
	I0410 22:49:40.836308   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.836319   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:40.836327   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:40.836408   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:40.882249   57719 cri.go:89] found id: ""
	I0410 22:49:40.882276   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.882285   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:40.882292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:40.882357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:40.925829   57719 cri.go:89] found id: ""
	I0410 22:49:40.925867   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.925878   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:40.925885   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:40.925936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:40.978494   57719 cri.go:89] found id: ""
	I0410 22:49:40.978529   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.978540   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:40.978547   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:40.978611   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:41.020935   57719 cri.go:89] found id: ""
	I0410 22:49:41.020964   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.020975   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:41.020982   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:41.021040   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:41.060779   57719 cri.go:89] found id: ""
	I0410 22:49:41.060812   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.060824   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:41.060831   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:41.060885   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:41.119604   57719 cri.go:89] found id: ""
	I0410 22:49:41.119632   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.119643   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:41.119653   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:41.119667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:41.188739   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:41.188774   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:41.203682   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:41.203735   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:41.293423   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:41.293451   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:41.293468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:41.366606   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:41.366649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:43.914447   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:43.930350   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:43.930439   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:43.968867   57719 cri.go:89] found id: ""
	I0410 22:49:43.968921   57719 logs.go:276] 0 containers: []
	W0410 22:49:43.968932   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:43.968939   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:43.969012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:44.010143   57719 cri.go:89] found id: ""
	I0410 22:49:44.010169   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.010181   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:44.010188   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:44.010264   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:44.048610   57719 cri.go:89] found id: ""
	I0410 22:49:44.048637   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.048645   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:44.048651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:44.048697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:44.105939   57719 cri.go:89] found id: ""
	I0410 22:49:44.105973   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.106001   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:44.106009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:44.106086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:44.149699   57719 cri.go:89] found id: ""
	I0410 22:49:44.149726   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.149735   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:44.149743   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:44.149803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:44.193131   57719 cri.go:89] found id: ""
	I0410 22:49:44.193159   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.193167   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:44.193173   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:44.193255   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:44.233751   57719 cri.go:89] found id: ""
	I0410 22:49:44.233781   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.233789   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:44.233801   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:44.233868   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:44.284404   57719 cri.go:89] found id: ""
	I0410 22:49:44.284432   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.284441   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:44.284449   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:44.284461   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:44.330082   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:44.330118   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:44.383452   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:44.383487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:44.399604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:44.399632   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:44.476328   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:44.476368   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:44.476415   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:47.054122   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:47.069583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:47.069654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:47.113953   57719 cri.go:89] found id: ""
	I0410 22:49:47.113981   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.113989   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:47.113995   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:47.114054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:47.156770   57719 cri.go:89] found id: ""
	I0410 22:49:47.156798   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.156808   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:47.156814   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:47.156891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:47.195227   57719 cri.go:89] found id: ""
	I0410 22:49:47.195252   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.195261   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:47.195266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:47.195328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:47.238109   57719 cri.go:89] found id: ""
	I0410 22:49:47.238138   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.238150   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:47.238157   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:47.238212   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.285062   57719 cri.go:89] found id: ""
	I0410 22:49:47.285093   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.285101   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:47.285108   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:47.285185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:47.324635   57719 cri.go:89] found id: ""
	I0410 22:49:47.324663   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.324670   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:47.324676   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:47.324744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:47.365404   57719 cri.go:89] found id: ""
	I0410 22:49:47.365437   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.365445   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:47.365468   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:47.365535   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:47.412296   57719 cri.go:89] found id: ""
	I0410 22:49:47.412335   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.412346   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:47.412367   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:47.412384   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:47.497998   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:47.498019   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:47.498033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:47.590502   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:47.590536   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:47.647665   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:47.647692   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:47.697704   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:47.697741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.213410   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:50.229408   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:50.229488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:50.268514   57719 cri.go:89] found id: ""
	I0410 22:49:50.268545   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.268556   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:50.268563   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:50.268620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:50.308733   57719 cri.go:89] found id: ""
	I0410 22:49:50.308762   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.308790   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:50.308796   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:50.308857   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:50.353929   57719 cri.go:89] found id: ""
	I0410 22:49:50.353966   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.353977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:50.353985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:50.354043   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:50.397979   57719 cri.go:89] found id: ""
	I0410 22:49:50.398009   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.398019   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:50.398026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:50.398086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:50.436191   57719 cri.go:89] found id: ""
	I0410 22:49:50.436222   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.436234   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:50.436241   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:50.436316   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:50.476462   57719 cri.go:89] found id: ""
	I0410 22:49:50.476486   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.476494   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:50.476499   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:50.476557   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:50.520025   57719 cri.go:89] found id: ""
	I0410 22:49:50.520054   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.520063   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:50.520071   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:50.520127   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:50.564535   57719 cri.go:89] found id: ""
	I0410 22:49:50.564570   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.564581   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:50.564593   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:50.564624   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:50.620587   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:50.620629   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.634802   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:50.634832   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:50.707625   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:50.707655   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:50.707671   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:50.791935   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:50.791970   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:53.339109   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:53.361555   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:53.361632   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:53.428170   57719 cri.go:89] found id: ""
	I0410 22:49:53.428202   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.428212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:53.428219   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:53.428281   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:53.501929   57719 cri.go:89] found id: ""
	I0410 22:49:53.501957   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.501968   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:53.501977   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:53.502055   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:53.548844   57719 cri.go:89] found id: ""
	I0410 22:49:53.548871   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.548890   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:53.548897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:53.548949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:53.595056   57719 cri.go:89] found id: ""
	I0410 22:49:53.595081   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.595090   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:53.595098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:53.595153   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:53.638885   57719 cri.go:89] found id: ""
	I0410 22:49:53.638920   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.638938   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:53.638946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:53.639046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:53.685526   57719 cri.go:89] found id: ""
	I0410 22:49:53.685565   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.685573   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:53.685579   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:53.685650   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:53.725084   57719 cri.go:89] found id: ""
	I0410 22:49:53.725112   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.725119   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:53.725125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:53.725172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:53.767031   57719 cri.go:89] found id: ""
	I0410 22:49:53.767062   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.767072   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:53.767083   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:53.767103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:53.826570   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:53.826618   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:53.843784   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:53.843822   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:53.926277   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:53.926299   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:53.926317   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:54.024735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:54.024782   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:56.586265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:56.602113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:56.602200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:56.647041   57719 cri.go:89] found id: ""
	I0410 22:49:56.647074   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.647086   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:56.647094   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:56.647168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:56.688053   57719 cri.go:89] found id: ""
	I0410 22:49:56.688086   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.688096   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:56.688104   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:56.688190   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:56.729176   57719 cri.go:89] found id: ""
	I0410 22:49:56.729210   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.729221   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:56.729229   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:56.729293   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:56.768877   57719 cri.go:89] found id: ""
	I0410 22:49:56.768905   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.768913   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:56.768919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:56.768966   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:56.807228   57719 cri.go:89] found id: ""
	I0410 22:49:56.807274   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.807286   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:56.807294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:56.807361   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:56.848183   57719 cri.go:89] found id: ""
	I0410 22:49:56.848216   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.848224   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:56.848230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:56.848284   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:56.887894   57719 cri.go:89] found id: ""
	I0410 22:49:56.887923   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.887931   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:56.887937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:56.887993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:56.926908   57719 cri.go:89] found id: ""
	I0410 22:49:56.926935   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.926944   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:56.926952   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:56.926968   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:57.012614   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:57.012640   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:57.012657   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:57.098735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:57.098784   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:57.140798   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:57.140831   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:57.204239   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:57.204283   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:59.720328   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:59.735964   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:59.736042   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:59.774351   57719 cri.go:89] found id: ""
	I0410 22:49:59.774383   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.774393   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:59.774407   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:59.774468   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:59.817222   57719 cri.go:89] found id: ""
	I0410 22:49:59.817248   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.817255   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:59.817260   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:59.817310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:59.854551   57719 cri.go:89] found id: ""
	I0410 22:49:59.854582   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.854594   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:59.854602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:59.854656   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:59.894334   57719 cri.go:89] found id: ""
	I0410 22:49:59.894367   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.894375   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:59.894381   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:59.894442   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:59.932446   57719 cri.go:89] found id: ""
	I0410 22:49:59.932472   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.932482   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:59.932489   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:59.932552   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:59.969168   57719 cri.go:89] found id: ""
	I0410 22:49:59.969193   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.969201   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:59.969209   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:59.969273   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:00.006918   57719 cri.go:89] found id: ""
	I0410 22:50:00.006960   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.006972   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:00.006979   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:00.007036   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:00.050380   57719 cri.go:89] found id: ""
	I0410 22:50:00.050411   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.050424   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:00.050433   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:00.050454   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:00.066340   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:00.066366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:00.146454   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:00.146479   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:00.146494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:00.231174   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:00.231225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:00.278732   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:00.278759   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:02.833035   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:02.847316   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:02.847380   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:02.888793   57719 cri.go:89] found id: ""
	I0410 22:50:02.888821   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.888832   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:02.888840   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:02.888897   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:02.926495   57719 cri.go:89] found id: ""
	I0410 22:50:02.926525   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.926535   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:02.926542   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:02.926603   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:02.966185   57719 cri.go:89] found id: ""
	I0410 22:50:02.966217   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.966227   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:02.966233   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:02.966295   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:03.007383   57719 cri.go:89] found id: ""
	I0410 22:50:03.007408   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.007414   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:03.007420   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:03.007490   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:03.044245   57719 cri.go:89] found id: ""
	I0410 22:50:03.044273   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.044281   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:03.044292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:03.044367   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:03.078820   57719 cri.go:89] found id: ""
	I0410 22:50:03.078849   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.078859   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:03.078866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:03.078927   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:03.117205   57719 cri.go:89] found id: ""
	I0410 22:50:03.117233   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.117244   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:03.117251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:03.117313   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:03.155698   57719 cri.go:89] found id: ""
	I0410 22:50:03.155725   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.155735   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:03.155743   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:03.155758   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:03.231685   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:03.231712   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:03.231724   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:03.315122   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:03.315167   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:03.361151   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:03.361186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:03.412134   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:03.412168   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:05.928116   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:05.942237   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:05.942337   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:05.983813   57719 cri.go:89] found id: ""
	I0410 22:50:05.983842   57719 logs.go:276] 0 containers: []
	W0410 22:50:05.983853   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:05.983861   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:05.983945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:06.024590   57719 cri.go:89] found id: ""
	I0410 22:50:06.024618   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.024626   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:06.024637   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:06.024698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:06.063040   57719 cri.go:89] found id: ""
	I0410 22:50:06.063075   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.063087   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:06.063094   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:06.063160   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:06.102224   57719 cri.go:89] found id: ""
	I0410 22:50:06.102250   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.102259   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:06.102273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:06.102342   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:06.144202   57719 cri.go:89] found id: ""
	I0410 22:50:06.144229   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.144236   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:06.144242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:06.144288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:06.189215   57719 cri.go:89] found id: ""
	I0410 22:50:06.189243   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.189250   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:06.189256   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:06.189308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:06.225218   57719 cri.go:89] found id: ""
	I0410 22:50:06.225247   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.225258   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:06.225266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:06.225330   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:06.265229   57719 cri.go:89] found id: ""
	I0410 22:50:06.265262   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.265273   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:06.265283   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:06.265306   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:06.279794   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:06.279825   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:06.348038   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:06.348063   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:06.348079   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:06.431293   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:06.431339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:06.476033   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:06.476060   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.032099   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:09.046628   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:09.046765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:09.086900   57719 cri.go:89] found id: ""
	I0410 22:50:09.086928   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.086936   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:09.086942   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:09.086998   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:09.124989   57719 cri.go:89] found id: ""
	I0410 22:50:09.125018   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.125028   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:09.125035   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:09.125096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:09.163720   57719 cri.go:89] found id: ""
	I0410 22:50:09.163749   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.163761   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:09.163769   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:09.163822   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:09.203846   57719 cri.go:89] found id: ""
	I0410 22:50:09.203875   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.203883   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:09.203888   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:09.203945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:09.242974   57719 cri.go:89] found id: ""
	I0410 22:50:09.243002   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.243016   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:09.243024   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:09.243092   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:09.278664   57719 cri.go:89] found id: ""
	I0410 22:50:09.278687   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.278694   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:09.278700   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:09.278762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:09.313335   57719 cri.go:89] found id: ""
	I0410 22:50:09.313359   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.313367   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:09.313372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:09.313419   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:09.351160   57719 cri.go:89] found id: ""
	I0410 22:50:09.351195   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.351206   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:09.351225   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:09.351239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:09.425989   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:09.426015   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:09.426033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:09.505189   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:09.505223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:09.549619   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:09.549651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.604322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:09.604360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:12.119780   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:12.135377   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:12.135458   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:12.178105   57719 cri.go:89] found id: ""
	I0410 22:50:12.178129   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.178138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:12.178144   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:12.178207   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:12.217369   57719 cri.go:89] found id: ""
	I0410 22:50:12.217397   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.217409   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:12.217424   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:12.217488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:12.254185   57719 cri.go:89] found id: ""
	I0410 22:50:12.254213   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.254222   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:12.254230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:12.254291   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:12.295007   57719 cri.go:89] found id: ""
	I0410 22:50:12.295038   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.295048   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:12.295057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:12.295125   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:12.334620   57719 cri.go:89] found id: ""
	I0410 22:50:12.334644   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.334651   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:12.334657   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:12.334707   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:12.371217   57719 cri.go:89] found id: ""
	I0410 22:50:12.371241   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.371249   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:12.371255   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:12.371302   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:12.409571   57719 cri.go:89] found id: ""
	I0410 22:50:12.409599   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.409608   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:12.409617   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:12.409675   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:12.453133   57719 cri.go:89] found id: ""
	I0410 22:50:12.453159   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.453169   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:12.453180   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:12.453194   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:12.505322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:12.505360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:12.520284   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:12.520315   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:12.608057   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:12.608082   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:12.608097   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:12.693240   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:12.693274   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:15.244628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:15.261915   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:15.262020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:15.302874   57719 cri.go:89] found id: ""
	I0410 22:50:15.302903   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.302910   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:15.302916   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:15.302973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:15.347492   57719 cri.go:89] found id: ""
	I0410 22:50:15.347518   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.347527   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:15.347534   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:15.347598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:15.394156   57719 cri.go:89] found id: ""
	I0410 22:50:15.394188   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.394198   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:15.394205   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:15.394265   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:15.437656   57719 cri.go:89] found id: ""
	I0410 22:50:15.437682   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.437690   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:15.437695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:15.437748   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:15.475658   57719 cri.go:89] found id: ""
	I0410 22:50:15.475686   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.475697   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:15.475704   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:15.475765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:15.517908   57719 cri.go:89] found id: ""
	I0410 22:50:15.517930   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.517937   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:15.517942   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:15.517991   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:15.560083   57719 cri.go:89] found id: ""
	I0410 22:50:15.560108   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.560117   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:15.560123   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:15.560178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:15.603967   57719 cri.go:89] found id: ""
	I0410 22:50:15.603994   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.604002   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:15.604013   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:15.604028   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:15.659994   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:15.660029   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:15.675627   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:15.675658   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:15.761297   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:15.761320   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:15.761339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:15.839225   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:15.839265   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.386062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:18.399609   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:18.399677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:18.443002   57719 cri.go:89] found id: ""
	I0410 22:50:18.443030   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.443040   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:18.443048   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:18.443106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:18.485089   57719 cri.go:89] found id: ""
	I0410 22:50:18.485121   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.485132   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:18.485140   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:18.485200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:18.524310   57719 cri.go:89] found id: ""
	I0410 22:50:18.524338   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.524347   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:18.524354   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:18.524412   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:18.563535   57719 cri.go:89] found id: ""
	I0410 22:50:18.563573   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.563582   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:18.563587   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:18.563634   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:18.600451   57719 cri.go:89] found id: ""
	I0410 22:50:18.600478   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.600487   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:18.600495   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:18.600562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:18.640445   57719 cri.go:89] found id: ""
	I0410 22:50:18.640472   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.640480   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:18.640485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:18.640550   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:18.677691   57719 cri.go:89] found id: ""
	I0410 22:50:18.677725   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.677746   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:18.677754   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:18.677817   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:18.716753   57719 cri.go:89] found id: ""
	I0410 22:50:18.716850   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.716876   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:18.716897   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:18.716918   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:18.804099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:18.804130   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:18.804144   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:18.883569   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:18.883611   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.930014   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:18.930045   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:18.980029   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:18.980065   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:21.495499   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:21.511001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:21.511075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:21.551469   57719 cri.go:89] found id: ""
	I0410 22:50:21.551511   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.551522   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:21.551540   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:21.551605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:21.590539   57719 cri.go:89] found id: ""
	I0410 22:50:21.590570   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.590580   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:21.590587   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:21.590654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:21.629005   57719 cri.go:89] found id: ""
	I0410 22:50:21.629030   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.629042   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:21.629048   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:21.629108   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:21.669745   57719 cri.go:89] found id: ""
	I0410 22:50:21.669767   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.669774   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:21.669780   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:21.669834   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:21.707806   57719 cri.go:89] found id: ""
	I0410 22:50:21.707831   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.707839   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:21.707844   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:21.707892   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:21.746698   57719 cri.go:89] found id: ""
	I0410 22:50:21.746727   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.746736   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:21.746742   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:21.746802   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:21.783048   57719 cri.go:89] found id: ""
	I0410 22:50:21.783070   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.783079   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:21.783084   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:21.783131   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:21.822457   57719 cri.go:89] found id: ""
	I0410 22:50:21.822484   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.822492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:21.822500   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:21.822513   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:21.894706   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:21.894747   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:21.909861   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:21.909903   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:21.999344   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:21.999370   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:21.999386   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:22.080004   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:22.080042   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:24.620924   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:24.634937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:24.634999   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:24.686619   57719 cri.go:89] found id: ""
	I0410 22:50:24.686644   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.686655   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:24.686662   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:24.686744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:24.723632   57719 cri.go:89] found id: ""
	I0410 22:50:24.723658   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.723667   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:24.723675   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:24.723738   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:24.760708   57719 cri.go:89] found id: ""
	I0410 22:50:24.760739   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.760750   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:24.760757   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:24.760804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:24.795680   57719 cri.go:89] found id: ""
	I0410 22:50:24.795712   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.795722   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:24.795729   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:24.795793   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:24.833033   57719 cri.go:89] found id: ""
	I0410 22:50:24.833063   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.833074   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:24.833082   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:24.833130   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:24.872840   57719 cri.go:89] found id: ""
	I0410 22:50:24.872864   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.872871   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:24.872877   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:24.872936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:24.915640   57719 cri.go:89] found id: ""
	I0410 22:50:24.915678   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.915688   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:24.915696   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:24.915755   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:24.957164   57719 cri.go:89] found id: ""
	I0410 22:50:24.957207   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.957219   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:24.957230   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:24.957244   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:25.006551   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:25.006601   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:25.021623   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:25.021649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:25.094699   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:25.094722   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:25.094741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:25.181280   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:25.181316   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:27.723475   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:27.737294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:27.737381   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:27.776098   57719 cri.go:89] found id: ""
	I0410 22:50:27.776126   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.776138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:27.776146   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:27.776203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:27.814324   57719 cri.go:89] found id: ""
	I0410 22:50:27.814352   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.814364   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:27.814371   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:27.814447   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:27.849573   57719 cri.go:89] found id: ""
	I0410 22:50:27.849603   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.849614   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:27.849621   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:27.849682   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:27.888904   57719 cri.go:89] found id: ""
	I0410 22:50:27.888932   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.888940   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:27.888946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:27.888993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:27.931772   57719 cri.go:89] found id: ""
	I0410 22:50:27.931800   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.931812   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:27.931821   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:27.931881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:27.975633   57719 cri.go:89] found id: ""
	I0410 22:50:27.975666   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.975676   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:27.975684   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:27.975736   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:28.012251   57719 cri.go:89] found id: ""
	I0410 22:50:28.012280   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.012290   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:28.012298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:28.012364   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:28.048848   57719 cri.go:89] found id: ""
	I0410 22:50:28.048886   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.048898   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:28.048908   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:28.048923   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:28.102215   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:28.102257   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:28.118052   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:28.118081   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:28.190738   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:28.190762   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:28.190777   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:28.269294   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:28.269330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:30.833927   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:30.848196   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:30.848266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:30.886077   57719 cri.go:89] found id: ""
	I0410 22:50:30.886117   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.886127   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:30.886133   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:30.886179   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:30.924638   57719 cri.go:89] found id: ""
	I0410 22:50:30.924668   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.924678   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:30.924686   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:30.924762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:30.961106   57719 cri.go:89] found id: ""
	I0410 22:50:30.961136   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.961147   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:30.961154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:30.961213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:31.001374   57719 cri.go:89] found id: ""
	I0410 22:50:31.001412   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.001427   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:31.001434   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:31.001498   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:31.038928   57719 cri.go:89] found id: ""
	I0410 22:50:31.038961   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.038971   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:31.038980   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:31.039057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:31.077033   57719 cri.go:89] found id: ""
	I0410 22:50:31.077067   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.077076   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:31.077083   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:31.077139   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:31.115227   57719 cri.go:89] found id: ""
	I0410 22:50:31.115257   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.115266   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:31.115273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:31.115335   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:31.157339   57719 cri.go:89] found id: ""
	I0410 22:50:31.157372   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.157382   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:31.157393   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:31.157409   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:31.198742   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:31.198770   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:31.255388   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:31.255422   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:31.272018   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:31.272048   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:31.344503   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:31.344524   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:31.344541   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:33.925749   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:33.939402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:33.939475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:33.976070   57719 cri.go:89] found id: ""
	I0410 22:50:33.976093   57719 logs.go:276] 0 containers: []
	W0410 22:50:33.976100   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:33.976106   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:33.976172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:34.013723   57719 cri.go:89] found id: ""
	I0410 22:50:34.013748   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.013758   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:34.013765   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:34.013821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:34.062678   57719 cri.go:89] found id: ""
	I0410 22:50:34.062704   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.062712   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:34.062718   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:34.062774   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:34.123007   57719 cri.go:89] found id: ""
	I0410 22:50:34.123038   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.123046   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:34.123052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:34.123096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:34.188811   57719 cri.go:89] found id: ""
	I0410 22:50:34.188841   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.188852   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:34.188859   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:34.188949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:34.223585   57719 cri.go:89] found id: ""
	I0410 22:50:34.223609   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.223618   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:34.223625   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:34.223680   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:34.260004   57719 cri.go:89] found id: ""
	I0410 22:50:34.260028   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.260036   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:34.260041   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:34.260096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:34.303064   57719 cri.go:89] found id: ""
	I0410 22:50:34.303093   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.303104   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:34.303115   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:34.303134   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:34.359105   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:34.359142   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:34.375420   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:34.375450   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:34.449619   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:34.449645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:34.449660   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:34.534214   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:34.534248   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:37.076525   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:37.090789   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:37.090849   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:37.130848   57719 cri.go:89] found id: ""
	I0410 22:50:37.130881   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.130893   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:37.130900   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:37.130967   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:37.170158   57719 cri.go:89] found id: ""
	I0410 22:50:37.170181   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.170188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:37.170194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:37.170269   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:37.210238   57719 cri.go:89] found id: ""
	I0410 22:50:37.210264   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.210274   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:37.210282   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:37.210328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:37.256763   57719 cri.go:89] found id: ""
	I0410 22:50:37.256789   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.256800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:37.256807   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:37.256875   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:37.295323   57719 cri.go:89] found id: ""
	I0410 22:50:37.295355   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.295364   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:37.295372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:37.295443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:37.334066   57719 cri.go:89] found id: ""
	I0410 22:50:37.334094   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.334105   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:37.334113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:37.334170   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:37.374428   57719 cri.go:89] found id: ""
	I0410 22:50:37.374458   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.374477   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:37.374485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:37.374544   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:37.412114   57719 cri.go:89] found id: ""
	I0410 22:50:37.412142   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.412152   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:37.412161   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:37.412174   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:37.453693   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:37.453717   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:37.505484   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:37.505524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:37.523645   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:37.523672   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:37.595107   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:37.595134   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:37.595150   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.180649   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:40.195168   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:40.195243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:40.240130   57719 cri.go:89] found id: ""
	I0410 22:50:40.240160   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.240169   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:40.240175   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:40.240241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:40.276366   57719 cri.go:89] found id: ""
	I0410 22:50:40.276390   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.276406   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:40.276412   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:40.276466   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:40.314991   57719 cri.go:89] found id: ""
	I0410 22:50:40.315016   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.315023   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:40.315029   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:40.315075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:40.354301   57719 cri.go:89] found id: ""
	I0410 22:50:40.354331   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.354342   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:40.354349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:40.354414   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:40.393093   57719 cri.go:89] found id: ""
	I0410 22:50:40.393125   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.393135   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:40.393143   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:40.393204   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:40.429641   57719 cri.go:89] found id: ""
	I0410 22:50:40.429665   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.429674   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:40.429680   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:40.429727   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:40.468184   57719 cri.go:89] found id: ""
	I0410 22:50:40.468213   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.468224   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:40.468232   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:40.468304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:40.505586   57719 cri.go:89] found id: ""
	I0410 22:50:40.505616   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.505627   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:40.505637   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:40.505652   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:40.562078   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:40.562119   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:40.578135   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:40.578213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:40.659018   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:40.659047   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:40.659061   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.746434   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:40.746478   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.287852   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:43.301797   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:43.301869   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:43.339778   57719 cri.go:89] found id: ""
	I0410 22:50:43.339813   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.339822   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:43.339829   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:43.339893   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:43.378716   57719 cri.go:89] found id: ""
	I0410 22:50:43.378748   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.378759   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:43.378767   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:43.378836   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:43.417128   57719 cri.go:89] found id: ""
	I0410 22:50:43.417152   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.417163   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:43.417171   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:43.417234   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:43.459577   57719 cri.go:89] found id: ""
	I0410 22:50:43.459608   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.459617   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:43.459623   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:43.459678   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:43.497519   57719 cri.go:89] found id: ""
	I0410 22:50:43.497551   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.497561   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:43.497566   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:43.497620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:43.534400   57719 cri.go:89] found id: ""
	I0410 22:50:43.534433   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.534444   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:43.534451   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:43.534540   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:43.574213   57719 cri.go:89] found id: ""
	I0410 22:50:43.574242   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.574253   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:43.574283   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:43.574344   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:43.611078   57719 cri.go:89] found id: ""
	I0410 22:50:43.611106   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.611113   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:43.611121   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:43.611137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:43.698166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:43.698202   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.749368   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:43.749395   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:43.801584   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:43.801621   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:43.817012   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:43.817050   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:43.892325   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:46.393325   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:46.407985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:46.408045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:46.442704   57719 cri.go:89] found id: ""
	I0410 22:50:46.442735   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.442745   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:46.442753   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:46.442821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:46.485582   57719 cri.go:89] found id: ""
	I0410 22:50:46.485611   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.485618   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:46.485625   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:46.485683   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:46.524199   57719 cri.go:89] found id: ""
	I0410 22:50:46.524227   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.524234   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:46.524240   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:46.524288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:46.560655   57719 cri.go:89] found id: ""
	I0410 22:50:46.560685   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.560694   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:46.560701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:46.560839   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:46.596617   57719 cri.go:89] found id: ""
	I0410 22:50:46.596646   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.596658   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:46.596666   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:46.596739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:46.634316   57719 cri.go:89] found id: ""
	I0410 22:50:46.634339   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.634347   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:46.634352   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:46.634399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:46.671466   57719 cri.go:89] found id: ""
	I0410 22:50:46.671493   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.671502   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:46.671509   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:46.671582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:46.709228   57719 cri.go:89] found id: ""
	I0410 22:50:46.709254   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.709265   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:46.709275   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:46.709291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:46.761329   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:46.761366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:46.778265   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:46.778288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:46.851092   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:46.851113   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:46.851125   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:46.929181   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:46.929223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:49.471285   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:49.485474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:49.485551   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:49.523799   57719 cri.go:89] found id: ""
	I0410 22:50:49.523826   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.523838   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:49.523846   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:49.523899   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:49.562102   57719 cri.go:89] found id: ""
	I0410 22:50:49.562129   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.562137   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:49.562143   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:49.562196   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:49.600182   57719 cri.go:89] found id: ""
	I0410 22:50:49.600204   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.600211   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:49.600216   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:49.600262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:49.640002   57719 cri.go:89] found id: ""
	I0410 22:50:49.640028   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.640039   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:49.640047   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:49.640111   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:49.678815   57719 cri.go:89] found id: ""
	I0410 22:50:49.678847   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.678858   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:49.678866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:49.678929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:49.716933   57719 cri.go:89] found id: ""
	I0410 22:50:49.716959   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.716969   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:49.716976   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:49.717039   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:49.756018   57719 cri.go:89] found id: ""
	I0410 22:50:49.756050   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.756060   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:49.756068   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:49.756132   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:49.802066   57719 cri.go:89] found id: ""
	I0410 22:50:49.802094   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.802103   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:49.802110   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:49.802123   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:49.856363   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:49.856417   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:49.872297   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:49.872330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:49.950152   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:49.950174   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:49.950185   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:50.031251   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:50.031291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:52.574794   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:52.589052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:52.589117   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:52.625911   57719 cri.go:89] found id: ""
	I0410 22:50:52.625941   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.625952   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:52.625960   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:52.626020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:52.668749   57719 cri.go:89] found id: ""
	I0410 22:50:52.668773   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.668781   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:52.668787   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:52.668835   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:52.713420   57719 cri.go:89] found id: ""
	I0410 22:50:52.713447   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.713457   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:52.713473   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:52.713538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:52.750265   57719 cri.go:89] found id: ""
	I0410 22:50:52.750294   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.750301   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:52.750307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:52.750354   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:52.787552   57719 cri.go:89] found id: ""
	I0410 22:50:52.787586   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.787597   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:52.787604   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:52.787670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:52.827988   57719 cri.go:89] found id: ""
	I0410 22:50:52.828013   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.828020   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:52.828026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:52.828072   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:52.864115   57719 cri.go:89] found id: ""
	I0410 22:50:52.864144   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.864155   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:52.864161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:52.864222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:52.906673   57719 cri.go:89] found id: ""
	I0410 22:50:52.906702   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.906712   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:52.906723   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:52.906742   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:52.960842   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:52.960892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:52.976084   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:52.976114   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:53.052612   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:53.052638   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:53.052656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:53.132465   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:53.132518   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:55.676947   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:55.691098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:55.691183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:55.728711   57719 cri.go:89] found id: ""
	I0410 22:50:55.728740   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.728750   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:55.728758   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:55.728824   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:55.768540   57719 cri.go:89] found id: ""
	I0410 22:50:55.768568   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.768578   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:55.768584   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:55.768649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:55.806901   57719 cri.go:89] found id: ""
	I0410 22:50:55.806928   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.806938   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:55.806945   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:55.807019   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:55.846777   57719 cri.go:89] found id: ""
	I0410 22:50:55.846807   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.846816   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:55.846822   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:55.846873   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:55.887143   57719 cri.go:89] found id: ""
	I0410 22:50:55.887172   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.887181   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:55.887186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:55.887241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:55.929008   57719 cri.go:89] found id: ""
	I0410 22:50:55.929032   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.929040   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:55.929046   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:55.929098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:55.969496   57719 cri.go:89] found id: ""
	I0410 22:50:55.969526   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.969536   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:55.969544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:55.969605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:56.007786   57719 cri.go:89] found id: ""
	I0410 22:50:56.007818   57719 logs.go:276] 0 containers: []
	W0410 22:50:56.007828   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:56.007838   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:56.007854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:56.061616   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:56.061653   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:56.078664   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:56.078689   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:56.165015   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:56.165037   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:56.165053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:56.241928   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:56.241971   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:58.785955   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:58.799544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:58.799604   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:58.837234   57719 cri.go:89] found id: ""
	I0410 22:50:58.837264   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.837275   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:58.837283   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:58.837350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:58.877818   57719 cri.go:89] found id: ""
	I0410 22:50:58.877854   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.877861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:58.877867   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:58.877921   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:58.919705   57719 cri.go:89] found id: ""
	I0410 22:50:58.919729   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.919740   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:58.919747   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:58.919809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:58.957995   57719 cri.go:89] found id: ""
	I0410 22:50:58.958020   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.958029   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:58.958036   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:58.958091   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:58.999966   57719 cri.go:89] found id: ""
	I0410 22:50:58.999995   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.000008   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:59.000016   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:59.000088   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:59.040516   57719 cri.go:89] found id: ""
	I0410 22:50:59.040541   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.040552   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:59.040560   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:59.040623   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:59.078869   57719 cri.go:89] found id: ""
	I0410 22:50:59.078899   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.078908   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:59.078913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:59.078961   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:59.116637   57719 cri.go:89] found id: ""
	I0410 22:50:59.116663   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.116670   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:59.116679   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:59.116697   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:59.195852   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:59.195892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:59.243256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:59.243282   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:59.299195   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:59.299263   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:59.314512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:59.314537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:59.386468   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
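	Every describe-nodes attempt above fails with "The connection to the server localhost:8443 was refused", meaning no API server is listening on the node's secure port. A hedged sketch of quick node-side checks; the unit names follow the kubelet and crio services already shown in the journalctl commands, and the port comes from the error message itself:

	    # Are the kubelet and CRI-O units running?
	    sudo systemctl status kubelet crio --no-pager

	    # Does anything answer on the API server's secure port?
	    curl -sk https://localhost:8443/healthz || echo 'apiserver not answering on 8443'

	    # Recent kubelet errors, which usually explain why the control-plane static pods never started.
	    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20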
	I0410 22:51:01.886907   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:01.905169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:01.905251   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:01.944154   57719 cri.go:89] found id: ""
	I0410 22:51:01.944187   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.944198   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:01.944205   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:01.944268   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:01.982743   57719 cri.go:89] found id: ""
	I0410 22:51:01.982778   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.982789   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:01.982797   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:01.982864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:02.020072   57719 cri.go:89] found id: ""
	I0410 22:51:02.020094   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.020102   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:02.020159   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:02.020213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:02.064250   57719 cri.go:89] found id: ""
	I0410 22:51:02.064273   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.064280   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:02.064286   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:02.064339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:02.105013   57719 cri.go:89] found id: ""
	I0410 22:51:02.105045   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.105054   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:02.105060   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:02.105106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:02.145664   57719 cri.go:89] found id: ""
	I0410 22:51:02.145689   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.145695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:02.145701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:02.145759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:02.189752   57719 cri.go:89] found id: ""
	I0410 22:51:02.189831   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.189850   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:02.189857   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:02.189929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:02.228315   57719 cri.go:89] found id: ""
	I0410 22:51:02.228347   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.228358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:02.228374   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:02.228390   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:02.281425   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:02.281460   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:02.296003   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:02.296031   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:02.389572   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:02.389599   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:02.389613   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.475881   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:02.475916   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.022037   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:05.037242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:05.037304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:05.073656   57719 cri.go:89] found id: ""
	I0410 22:51:05.073687   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.073698   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:05.073705   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:05.073767   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:05.114321   57719 cri.go:89] found id: ""
	I0410 22:51:05.114348   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.114356   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:05.114361   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:05.114430   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:05.153119   57719 cri.go:89] found id: ""
	I0410 22:51:05.153156   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.153164   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:05.153170   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:05.153230   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:05.193393   57719 cri.go:89] found id: ""
	I0410 22:51:05.193420   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.193428   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:05.193433   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:05.193479   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:05.229826   57719 cri.go:89] found id: ""
	I0410 22:51:05.229853   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.229861   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:05.229867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:05.229915   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:05.265511   57719 cri.go:89] found id: ""
	I0410 22:51:05.265544   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.265555   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:05.265562   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:05.265627   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:05.302257   57719 cri.go:89] found id: ""
	I0410 22:51:05.302287   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.302297   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:05.302305   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:05.302386   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:05.347344   57719 cri.go:89] found id: ""
	I0410 22:51:05.347372   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.347380   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:05.347388   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:05.347399   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:05.421796   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:05.421817   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:05.421829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:05.501803   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:05.501839   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.549161   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:05.549195   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:05.599598   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:05.599633   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.115679   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:08.130273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:08.130350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:08.172302   57719 cri.go:89] found id: ""
	I0410 22:51:08.172328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.172335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:08.172342   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:08.172390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:08.220789   57719 cri.go:89] found id: ""
	I0410 22:51:08.220812   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.220819   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:08.220825   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:08.220874   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:08.258299   57719 cri.go:89] found id: ""
	I0410 22:51:08.258328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.258341   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:08.258349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:08.258404   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:08.297698   57719 cri.go:89] found id: ""
	I0410 22:51:08.297726   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.297733   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:08.297739   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:08.297787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:08.335564   57719 cri.go:89] found id: ""
	I0410 22:51:08.335595   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.335605   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:08.335613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:08.335671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:08.373340   57719 cri.go:89] found id: ""
	I0410 22:51:08.373367   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.373377   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:08.373384   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:08.373481   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:08.413961   57719 cri.go:89] found id: ""
	I0410 22:51:08.413984   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.413993   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:08.414001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:08.414062   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:08.459449   57719 cri.go:89] found id: ""
	I0410 22:51:08.459481   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.459492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:08.459505   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:08.459521   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:08.518061   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:08.518103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.533653   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:08.533680   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:08.619882   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:08.619917   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:08.619932   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:08.696329   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:08.696364   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:11.256846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:11.271521   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:11.271582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:11.312829   57719 cri.go:89] found id: ""
	I0410 22:51:11.312851   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.312869   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:11.312876   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:11.312930   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:11.355183   57719 cri.go:89] found id: ""
	I0410 22:51:11.355210   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.355220   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:11.355227   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:11.355287   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:11.394345   57719 cri.go:89] found id: ""
	I0410 22:51:11.394376   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.394388   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:11.394396   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:11.394460   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:11.434128   57719 cri.go:89] found id: ""
	I0410 22:51:11.434155   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.434163   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:11.434169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:11.434219   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:11.473160   57719 cri.go:89] found id: ""
	I0410 22:51:11.473189   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.473201   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:11.473208   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:11.473278   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:11.513782   57719 cri.go:89] found id: ""
	I0410 22:51:11.513815   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.513826   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:11.513835   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:11.513891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:11.556057   57719 cri.go:89] found id: ""
	I0410 22:51:11.556085   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.556093   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:11.556100   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:11.556147   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:11.594557   57719 cri.go:89] found id: ""
	I0410 22:51:11.594579   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.594586   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:11.594594   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:11.594609   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:11.672795   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:11.672841   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:11.716011   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:11.716046   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:11.769372   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:11.769413   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:11.784589   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:11.784617   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:11.857051   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.358019   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:14.372116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:14.372192   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:14.412020   57719 cri.go:89] found id: ""
	I0410 22:51:14.412049   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.412061   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:14.412068   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:14.412128   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:14.450317   57719 cri.go:89] found id: ""
	I0410 22:51:14.450349   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.450360   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:14.450368   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:14.450426   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:14.509080   57719 cri.go:89] found id: ""
	I0410 22:51:14.509104   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.509110   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:14.509116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:14.509185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:14.561540   57719 cri.go:89] found id: ""
	I0410 22:51:14.561572   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.561583   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:14.561590   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:14.561670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:14.622498   57719 cri.go:89] found id: ""
	I0410 22:51:14.622528   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.622538   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:14.622546   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:14.622606   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:14.678451   57719 cri.go:89] found id: ""
	I0410 22:51:14.678481   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.678490   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:14.678498   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:14.678560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:14.720264   57719 cri.go:89] found id: ""
	I0410 22:51:14.720302   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.720315   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:14.720323   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:14.720388   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:14.758039   57719 cri.go:89] found id: ""
	I0410 22:51:14.758063   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.758071   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:14.758079   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:14.758090   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:14.808111   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:14.808171   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:14.825444   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:14.825487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:14.906859   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.906884   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:14.906899   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:14.995176   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:14.995225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:17.541159   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:17.556679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:17.556749   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:17.595839   57719 cri.go:89] found id: ""
	I0410 22:51:17.595869   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.595880   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:17.595895   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:17.595954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:17.633921   57719 cri.go:89] found id: ""
	I0410 22:51:17.633947   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.633957   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:17.633964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:17.634033   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:17.673467   57719 cri.go:89] found id: ""
	I0410 22:51:17.673493   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.673501   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:17.673507   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:17.673554   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:17.709631   57719 cri.go:89] found id: ""
	I0410 22:51:17.709660   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.709670   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:17.709679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:17.709739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:17.760852   57719 cri.go:89] found id: ""
	I0410 22:51:17.760880   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.760893   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:17.760908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:17.760969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:17.798074   57719 cri.go:89] found id: ""
	I0410 22:51:17.798099   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.798108   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:17.798117   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:17.798178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:17.835807   57719 cri.go:89] found id: ""
	I0410 22:51:17.835839   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.835854   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:17.835863   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:17.835935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:17.876812   57719 cri.go:89] found id: ""
	I0410 22:51:17.876846   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.876856   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:17.876868   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:17.876882   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:17.891121   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:17.891149   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:17.966241   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:17.966264   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:17.966277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:18.042633   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:18.042667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:18.088294   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:18.088327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:20.647016   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:20.662573   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:20.662640   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:20.701147   57719 cri.go:89] found id: ""
	I0410 22:51:20.701173   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.701184   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:20.701191   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:20.701252   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:20.739005   57719 cri.go:89] found id: ""
	I0410 22:51:20.739038   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.739049   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:20.739057   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:20.739112   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:20.776335   57719 cri.go:89] found id: ""
	I0410 22:51:20.776365   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.776379   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:20.776386   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:20.776471   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:20.814755   57719 cri.go:89] found id: ""
	I0410 22:51:20.814789   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.814800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:20.814808   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:20.814867   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:20.853872   57719 cri.go:89] found id: ""
	I0410 22:51:20.853897   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.853904   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:20.853910   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:20.853958   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:20.891616   57719 cri.go:89] found id: ""
	I0410 22:51:20.891648   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.891656   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:20.891662   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:20.891710   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:20.930285   57719 cri.go:89] found id: ""
	I0410 22:51:20.930316   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.930326   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:20.930341   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:20.930398   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:20.967857   57719 cri.go:89] found id: ""
	I0410 22:51:20.967894   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.967904   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:20.967913   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:20.967934   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:21.053166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:21.053201   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:21.098860   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:21.098888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:21.150395   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:21.150430   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:21.164707   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:21.164737   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:21.251010   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
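	The same diagnostics keep being collected every few seconds. They can also be pulled from the host with the minikube CLI instead of reading them inline; a small sketch, with <profile> standing in for the profile name used by this test run:

	    # Cluster and component state as minikube sees it.
	    minikube status -p <profile>

	    # Dump the full log bundle (kubelet, CRI-O, dmesg, etc.) to a file for inspection.
	    minikube logs -p <profile> --file=minikube-logs.txt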
	I0410 22:51:23.751441   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:23.769949   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:23.770014   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:23.809652   57719 cri.go:89] found id: ""
	I0410 22:51:23.809678   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.809686   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:23.809692   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:23.809740   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:23.847331   57719 cri.go:89] found id: ""
	I0410 22:51:23.847364   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.847374   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:23.847383   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:23.847445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:23.889459   57719 cri.go:89] found id: ""
	I0410 22:51:23.889488   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.889498   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:23.889505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:23.889564   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:23.932683   57719 cri.go:89] found id: ""
	I0410 22:51:23.932712   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.932720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:23.932727   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:23.932787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:23.974161   57719 cri.go:89] found id: ""
	I0410 22:51:23.974187   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.974194   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:23.974200   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:23.974253   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:24.013058   57719 cri.go:89] found id: ""
	I0410 22:51:24.013087   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.013098   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:24.013106   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:24.013169   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:24.052556   57719 cri.go:89] found id: ""
	I0410 22:51:24.052582   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.052590   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:24.052596   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:24.052643   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:24.089940   57719 cri.go:89] found id: ""
	I0410 22:51:24.089967   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.089974   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:24.089982   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:24.089992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:24.133198   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:24.133226   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:24.186615   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:24.186651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:24.200559   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:24.200586   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:24.277061   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:24.277093   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:24.277109   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:26.855354   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:26.870269   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:26.870329   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:26.910056   57719 cri.go:89] found id: ""
	I0410 22:51:26.910084   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.910094   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:26.910101   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:26.910163   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:26.949646   57719 cri.go:89] found id: ""
	I0410 22:51:26.949674   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.949684   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:26.949690   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:26.949759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:26.990945   57719 cri.go:89] found id: ""
	I0410 22:51:26.990970   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.990977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:26.990984   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:26.991053   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:27.029464   57719 cri.go:89] found id: ""
	I0410 22:51:27.029491   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.029500   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:27.029505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:27.029562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:27.072194   57719 cri.go:89] found id: ""
	I0410 22:51:27.072235   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.072260   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:27.072270   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:27.072339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:27.106942   57719 cri.go:89] found id: ""
	I0410 22:51:27.106969   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.106979   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:27.106985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:27.107045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:27.144851   57719 cri.go:89] found id: ""
	I0410 22:51:27.144885   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.144894   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:27.144909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:27.144970   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:27.188138   57719 cri.go:89] found id: ""
	I0410 22:51:27.188166   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.188178   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:27.188189   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:27.188204   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:27.241911   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:27.241943   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:27.255296   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:27.255322   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:27.327638   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:27.327663   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:27.327678   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:27.409048   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:27.409083   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:29.960093   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:29.975583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:29.975647   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:30.018120   57719 cri.go:89] found id: ""
	I0410 22:51:30.018149   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.018159   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:30.018166   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:30.018225   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:30.055487   57719 cri.go:89] found id: ""
	I0410 22:51:30.055511   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.055518   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:30.055524   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:30.055573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:30.093723   57719 cri.go:89] found id: ""
	I0410 22:51:30.093749   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.093756   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:30.093761   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:30.093808   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:30.138278   57719 cri.go:89] found id: ""
	I0410 22:51:30.138306   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.138317   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:30.138324   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:30.138385   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:30.174454   57719 cri.go:89] found id: ""
	I0410 22:51:30.174484   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.174495   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:30.174502   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:30.174573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:30.213189   57719 cri.go:89] found id: ""
	I0410 22:51:30.213214   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.213221   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:30.213227   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:30.213272   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:30.253264   57719 cri.go:89] found id: ""
	I0410 22:51:30.253294   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.253304   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:30.253309   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:30.253357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:30.289729   57719 cri.go:89] found id: ""
	I0410 22:51:30.289755   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.289767   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:30.289777   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:30.289793   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:30.303387   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:30.303416   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:30.381294   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:30.381315   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:30.381331   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:30.468072   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:30.468110   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:30.508761   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:30.508794   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.061654   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:33.077072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:33.077146   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:33.113753   57719 cri.go:89] found id: ""
	I0410 22:51:33.113781   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.113791   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:33.113798   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:33.113848   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:33.149212   57719 cri.go:89] found id: ""
	I0410 22:51:33.149238   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.149249   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:33.149256   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:33.149321   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:33.185619   57719 cri.go:89] found id: ""
	I0410 22:51:33.185649   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.185659   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:33.185667   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:33.185725   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:33.222270   57719 cri.go:89] found id: ""
	I0410 22:51:33.222301   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.222313   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:33.222320   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:33.222375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:33.258594   57719 cri.go:89] found id: ""
	I0410 22:51:33.258624   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.258636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:33.258642   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:33.258689   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:33.298326   57719 cri.go:89] found id: ""
	I0410 22:51:33.298360   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.298368   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:33.298374   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:33.298438   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:33.337407   57719 cri.go:89] found id: ""
	I0410 22:51:33.337438   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.337449   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:33.337456   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:33.337520   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:33.374971   57719 cri.go:89] found id: ""
	I0410 22:51:33.375003   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.375014   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:33.375024   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:33.375039   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:33.415256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:33.415288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.467895   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:33.467929   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:33.484604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:33.484639   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:33.562267   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:33.562288   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:33.562299   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:36.142628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:36.157825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:36.157883   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:36.199418   57719 cri.go:89] found id: ""
	I0410 22:51:36.199446   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.199456   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:36.199463   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:36.199523   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:36.238136   57719 cri.go:89] found id: ""
	I0410 22:51:36.238166   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.238174   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:36.238180   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:36.238229   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:36.273995   57719 cri.go:89] found id: ""
	I0410 22:51:36.274026   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.274037   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:36.274049   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:36.274110   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:36.311007   57719 cri.go:89] found id: ""
	I0410 22:51:36.311039   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.311049   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:36.311057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:36.311122   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:36.351062   57719 cri.go:89] found id: ""
	I0410 22:51:36.351086   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.351093   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:36.351099   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:36.351152   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:36.388660   57719 cri.go:89] found id: ""
	I0410 22:51:36.388689   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.388703   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:36.388711   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:36.388762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:36.428715   57719 cri.go:89] found id: ""
	I0410 22:51:36.428753   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.428761   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:36.428767   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:36.428831   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:36.467186   57719 cri.go:89] found id: ""
	I0410 22:51:36.467213   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.467220   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:36.467228   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:36.467239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:36.521831   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:36.521860   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:36.536929   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:36.536957   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:36.614624   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:36.614647   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:36.614659   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:36.694604   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:36.694646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.240039   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:39.255177   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:39.255262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:39.293063   57719 cri.go:89] found id: ""
	I0410 22:51:39.293091   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.293113   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:39.293120   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:39.293181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:39.331603   57719 cri.go:89] found id: ""
	I0410 22:51:39.331631   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.331639   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:39.331645   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:39.331697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:39.372881   57719 cri.go:89] found id: ""
	I0410 22:51:39.372908   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.372919   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:39.372926   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:39.372987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:39.417399   57719 cri.go:89] found id: ""
	I0410 22:51:39.417425   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.417435   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:39.417442   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:39.417503   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:39.458836   57719 cri.go:89] found id: ""
	I0410 22:51:39.458868   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.458877   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:39.458882   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:39.458932   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:39.496436   57719 cri.go:89] found id: ""
	I0410 22:51:39.496460   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.496467   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:39.496474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:39.496532   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:39.534649   57719 cri.go:89] found id: ""
	I0410 22:51:39.534681   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.534690   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:39.534695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:39.534754   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:39.571677   57719 cri.go:89] found id: ""
	I0410 22:51:39.571698   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.571705   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:39.571714   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:39.571725   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.621445   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:39.621482   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:39.676341   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:39.676382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:39.691543   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:39.691573   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:39.769452   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:39.769477   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:39.769493   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:42.350823   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:42.367124   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:42.367199   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:42.407511   57719 cri.go:89] found id: ""
	I0410 22:51:42.407545   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.407554   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:42.407560   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:42.407622   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:42.442913   57719 cri.go:89] found id: ""
	I0410 22:51:42.442948   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.442958   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:42.442964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:42.443027   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:42.480747   57719 cri.go:89] found id: ""
	I0410 22:51:42.480777   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.480786   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:42.480792   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:42.480846   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:42.521610   57719 cri.go:89] found id: ""
	I0410 22:51:42.521635   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.521644   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:42.521651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:42.521698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:42.561076   57719 cri.go:89] found id: ""
	I0410 22:51:42.561108   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.561119   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:42.561127   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:42.561189   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:42.598034   57719 cri.go:89] found id: ""
	I0410 22:51:42.598059   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.598066   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:42.598072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:42.598129   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:42.637051   57719 cri.go:89] found id: ""
	I0410 22:51:42.637085   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.637095   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:42.637103   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:42.637162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:42.676051   57719 cri.go:89] found id: ""
	I0410 22:51:42.676084   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.676094   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:42.676105   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:42.676120   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:42.719607   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:42.719634   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:42.770791   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:42.770829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:42.785704   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:42.785730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:42.876445   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:42.876475   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:42.876490   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:45.458721   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:45.474125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:45.474203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:45.511105   57719 cri.go:89] found id: ""
	I0410 22:51:45.511143   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.511153   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:45.511161   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:45.511220   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:45.552891   57719 cri.go:89] found id: ""
	I0410 22:51:45.552916   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.552924   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:45.552930   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:45.552986   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:45.592423   57719 cri.go:89] found id: ""
	I0410 22:51:45.592458   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.592474   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:45.592481   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:45.592542   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:45.630964   57719 cri.go:89] found id: ""
	I0410 22:51:45.631009   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.631026   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:45.631033   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:45.631098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:45.669557   57719 cri.go:89] found id: ""
	I0410 22:51:45.669586   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.669595   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:45.669602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:45.669702   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:45.706359   57719 cri.go:89] found id: ""
	I0410 22:51:45.706387   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.706395   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:45.706402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:45.706463   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:45.743301   57719 cri.go:89] found id: ""
	I0410 22:51:45.743330   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.743337   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:45.743343   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:45.743390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:45.781679   57719 cri.go:89] found id: ""
	I0410 22:51:45.781703   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.781711   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:45.781718   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:45.781730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:45.835251   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:45.835286   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:45.849255   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:45.849284   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:45.918404   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:45.918436   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:45.918452   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:45.999556   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:45.999591   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.546421   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:48.561243   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:48.561314   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:48.618335   57719 cri.go:89] found id: ""
	I0410 22:51:48.618361   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.618369   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:48.618375   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:48.618445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:48.656116   57719 cri.go:89] found id: ""
	I0410 22:51:48.656151   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.656160   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:48.656167   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:48.656222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:48.694846   57719 cri.go:89] found id: ""
	I0410 22:51:48.694874   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.694884   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:48.694897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:48.694971   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:48.731988   57719 cri.go:89] found id: ""
	I0410 22:51:48.732020   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.732031   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:48.732039   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:48.732102   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:48.768595   57719 cri.go:89] found id: ""
	I0410 22:51:48.768627   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.768636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:48.768643   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:48.768708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:48.807263   57719 cri.go:89] found id: ""
	I0410 22:51:48.807292   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.807302   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:48.807308   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:48.807366   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:48.845291   57719 cri.go:89] found id: ""
	I0410 22:51:48.845317   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.845325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:48.845329   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:48.845399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:48.891056   57719 cri.go:89] found id: ""
	I0410 22:51:48.891081   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.891091   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:48.891102   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:48.891117   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.931963   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:48.931992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:48.985539   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:48.985579   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:49.000685   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:49.000716   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:49.076097   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:49.076127   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:49.076143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:51.663336   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:51.678249   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:51.678315   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:51.720062   57719 cri.go:89] found id: ""
	I0410 22:51:51.720088   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.720096   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:51.720103   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:51.720164   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:51.766351   57719 cri.go:89] found id: ""
	I0410 22:51:51.766387   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.766395   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:51.766401   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:51.766448   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:51.813037   57719 cri.go:89] found id: ""
	I0410 22:51:51.813068   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.813080   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:51.813087   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:51.813150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:51.849232   57719 cri.go:89] found id: ""
	I0410 22:51:51.849262   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.849273   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:51.849280   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:51.849346   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:51.886392   57719 cri.go:89] found id: ""
	I0410 22:51:51.886415   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.886422   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:51.886428   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:51.886485   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:51.930859   57719 cri.go:89] found id: ""
	I0410 22:51:51.930896   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.930905   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:51.930913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:51.930978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:51.970403   57719 cri.go:89] found id: ""
	I0410 22:51:51.970501   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.970524   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:51.970533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:51.970599   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:52.008281   57719 cri.go:89] found id: ""
	I0410 22:51:52.008311   57719 logs.go:276] 0 containers: []
	W0410 22:51:52.008322   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:52.008333   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:52.008347   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:52.060623   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:52.060656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:52.075529   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:52.075559   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:52.158330   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:52.158356   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:52.158371   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:52.236356   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:52.236392   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:54.782448   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:54.796928   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:54.796997   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:54.836297   57719 cri.go:89] found id: ""
	I0410 22:51:54.836326   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.836335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:54.836341   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:54.836390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:54.873501   57719 cri.go:89] found id: ""
	I0410 22:51:54.873532   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.873540   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:54.873547   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:54.873617   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:54.914200   57719 cri.go:89] found id: ""
	I0410 22:51:54.914227   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.914238   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:54.914247   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:54.914308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:54.958654   57719 cri.go:89] found id: ""
	I0410 22:51:54.958682   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.958693   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:54.958702   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:54.958761   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:55.017032   57719 cri.go:89] found id: ""
	I0410 22:51:55.017078   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.017090   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:55.017101   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:55.017167   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:55.093024   57719 cri.go:89] found id: ""
	I0410 22:51:55.093059   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.093070   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:55.093085   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:55.093156   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:55.142412   57719 cri.go:89] found id: ""
	I0410 22:51:55.142441   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.142456   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:55.142464   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:55.142521   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:55.180116   57719 cri.go:89] found id: ""
	I0410 22:51:55.180147   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.180159   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:55.180169   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:55.180186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:55.249118   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:55.249139   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:55.249153   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:55.327558   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:55.327597   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:55.373127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:55.373163   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:55.431602   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:55.431647   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:57.947559   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:57.962916   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:57.962983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:58.000955   57719 cri.go:89] found id: ""
	I0410 22:51:58.000983   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.000990   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:58.000997   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:58.001049   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:58.040556   57719 cri.go:89] found id: ""
	I0410 22:51:58.040579   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.040586   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:58.040592   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:58.040649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:58.079121   57719 cri.go:89] found id: ""
	I0410 22:51:58.079148   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.079155   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:58.079161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:58.079240   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:58.119876   57719 cri.go:89] found id: ""
	I0410 22:51:58.119902   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.119914   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:58.119929   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:58.119987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:58.160130   57719 cri.go:89] found id: ""
	I0410 22:51:58.160162   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.160173   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:58.160181   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:58.160258   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:58.198162   57719 cri.go:89] found id: ""
	I0410 22:51:58.198195   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.198207   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:58.198215   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:58.198266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:58.235049   57719 cri.go:89] found id: ""
	I0410 22:51:58.235078   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.235089   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:58.235096   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:58.235157   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:58.275786   57719 cri.go:89] found id: ""
	I0410 22:51:58.275825   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.275845   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:58.275856   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:58.275872   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:58.316246   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:58.316277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:58.371614   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:58.371649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:58.386610   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:58.386646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:58.465167   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:58.465187   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:58.465199   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:01.049405   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:01.073251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:01.073328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:01.125169   57719 cri.go:89] found id: ""
	I0410 22:52:01.125201   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.125212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:01.125220   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:01.125289   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:01.171256   57719 cri.go:89] found id: ""
	I0410 22:52:01.171289   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.171300   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:01.171308   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:01.171376   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:01.210444   57719 cri.go:89] found id: ""
	I0410 22:52:01.210478   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.210489   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:01.210503   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:01.210568   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:01.252448   57719 cri.go:89] found id: ""
	I0410 22:52:01.252473   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.252480   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:01.252486   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:01.252531   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:01.293084   57719 cri.go:89] found id: ""
	I0410 22:52:01.293117   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.293128   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:01.293136   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:01.293208   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:01.330992   57719 cri.go:89] found id: ""
	I0410 22:52:01.331019   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.331026   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:01.331032   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:01.331081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:01.369286   57719 cri.go:89] found id: ""
	I0410 22:52:01.369315   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.369325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:01.369331   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:01.369378   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:01.409888   57719 cri.go:89] found id: ""
	I0410 22:52:01.409916   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.409924   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:01.409933   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:01.409944   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:01.484535   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:01.484557   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:01.484569   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:01.565727   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:01.565778   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:01.606987   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:01.607018   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:01.659492   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:01.659529   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.174971   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:04.190302   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:04.190382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:04.230050   57719 cri.go:89] found id: ""
	I0410 22:52:04.230080   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.230090   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:04.230097   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:04.230162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:04.269870   57719 cri.go:89] found id: ""
	I0410 22:52:04.269902   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.269908   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:04.269914   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:04.269969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:04.310977   57719 cri.go:89] found id: ""
	I0410 22:52:04.311008   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.311019   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:04.311026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:04.311096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:04.349108   57719 cri.go:89] found id: ""
	I0410 22:52:04.349136   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.349147   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:04.349154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:04.349216   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:04.389590   57719 cri.go:89] found id: ""
	I0410 22:52:04.389613   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.389625   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:04.389633   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:04.389697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:04.432962   57719 cri.go:89] found id: ""
	I0410 22:52:04.432989   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.433001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:04.433008   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:04.433070   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:04.473912   57719 cri.go:89] found id: ""
	I0410 22:52:04.473946   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.473955   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:04.473960   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:04.474029   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:04.516157   57719 cri.go:89] found id: ""
	I0410 22:52:04.516182   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.516192   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:04.516203   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:04.516218   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:04.569047   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:04.569082   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:04.622639   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:04.622673   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.638441   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:04.638470   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:04.718203   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:04.718227   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:04.718241   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:07.302147   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:07.315919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:07.315984   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:07.354692   57719 cri.go:89] found id: ""
	I0410 22:52:07.354723   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.354733   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:07.354740   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:07.354803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:07.393418   57719 cri.go:89] found id: ""
	I0410 22:52:07.393447   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.393459   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:07.393466   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:07.393525   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:07.436810   57719 cri.go:89] found id: ""
	I0410 22:52:07.436837   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.436847   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:07.436855   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:07.436920   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:07.478685   57719 cri.go:89] found id: ""
	I0410 22:52:07.478709   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.478720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:07.478735   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:07.478792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:07.515699   57719 cri.go:89] found id: ""
	I0410 22:52:07.515727   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.515737   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:07.515744   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:07.515805   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:07.556419   57719 cri.go:89] found id: ""
	I0410 22:52:07.556443   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.556451   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:07.556457   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:07.556560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:07.598076   57719 cri.go:89] found id: ""
	I0410 22:52:07.598106   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.598113   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:07.598119   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:07.598183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:07.637778   57719 cri.go:89] found id: ""
	I0410 22:52:07.637814   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.637826   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:07.637839   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:07.637854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:07.693688   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:07.693728   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:07.709256   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:07.709289   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:07.778519   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:07.778544   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:07.778584   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:07.858937   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:07.858973   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.405765   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:10.422019   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:10.422083   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:10.463779   57719 cri.go:89] found id: ""
	I0410 22:52:10.463818   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.463829   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:10.463836   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:10.463923   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:10.503680   57719 cri.go:89] found id: ""
	I0410 22:52:10.503710   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.503718   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:10.503736   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:10.503804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:10.545567   57719 cri.go:89] found id: ""
	I0410 22:52:10.545594   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.545605   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:10.545613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:10.545671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:10.590864   57719 cri.go:89] found id: ""
	I0410 22:52:10.590892   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.590901   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:10.590908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:10.590968   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:10.634628   57719 cri.go:89] found id: ""
	I0410 22:52:10.634659   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.634670   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:10.634677   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:10.634758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:10.681477   57719 cri.go:89] found id: ""
	I0410 22:52:10.681507   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.681526   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:10.681533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:10.681585   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:10.725203   57719 cri.go:89] found id: ""
	I0410 22:52:10.725229   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.725328   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:10.725368   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:10.725443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:10.764994   57719 cri.go:89] found id: ""
	I0410 22:52:10.765028   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.765036   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:10.765044   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:10.765094   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.808981   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:10.809012   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:10.866429   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:10.866468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:10.882512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:10.882537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:10.963016   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:10.963041   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:10.963053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:13.544552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:13.558161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:13.558238   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:13.596945   57719 cri.go:89] found id: ""
	I0410 22:52:13.596977   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.596988   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:13.596996   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:13.597057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:13.637920   57719 cri.go:89] found id: ""
	I0410 22:52:13.637944   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.637951   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:13.637958   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:13.638012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:13.676777   57719 cri.go:89] found id: ""
	I0410 22:52:13.676808   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.676819   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:13.676826   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:13.676887   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:13.714054   57719 cri.go:89] found id: ""
	I0410 22:52:13.714078   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.714086   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:13.714091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:13.714142   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:13.757162   57719 cri.go:89] found id: ""
	I0410 22:52:13.757194   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.757206   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:13.757214   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:13.757276   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:13.793578   57719 cri.go:89] found id: ""
	I0410 22:52:13.793616   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.793629   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:13.793636   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:13.793697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:13.831307   57719 cri.go:89] found id: ""
	I0410 22:52:13.831336   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.831346   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:13.831353   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:13.831400   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:13.872072   57719 cri.go:89] found id: ""
	I0410 22:52:13.872109   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.872117   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:13.872127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:13.872143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:13.926909   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:13.926947   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:13.943095   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:13.943126   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:14.015301   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:14.015336   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:14.015351   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:14.101100   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:14.101137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:16.650213   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:16.664603   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:16.664677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:16.701498   57719 cri.go:89] found id: ""
	I0410 22:52:16.701527   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.701539   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:16.701547   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:16.701618   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:16.740687   57719 cri.go:89] found id: ""
	I0410 22:52:16.740716   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.740725   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:16.740730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:16.740789   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:16.777349   57719 cri.go:89] found id: ""
	I0410 22:52:16.777372   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.777380   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:16.777385   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:16.777454   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:16.819855   57719 cri.go:89] found id: ""
	I0410 22:52:16.819890   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.819900   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:16.819909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:16.819973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:16.859939   57719 cri.go:89] found id: ""
	I0410 22:52:16.859970   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.859981   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:16.859991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:16.860056   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:16.897861   57719 cri.go:89] found id: ""
	I0410 22:52:16.897886   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.897893   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:16.897899   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:16.897962   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:16.935642   57719 cri.go:89] found id: ""
	I0410 22:52:16.935673   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.935681   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:16.935687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:16.935733   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:16.974268   57719 cri.go:89] found id: ""
	I0410 22:52:16.974294   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.974302   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:16.974311   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:16.974327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:17.027850   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:17.027888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:17.043343   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:17.043379   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:17.120945   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:17.120967   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:17.120979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.204831   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:17.204868   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:19.749712   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:19.764102   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:19.764181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:19.800759   57719 cri.go:89] found id: ""
	I0410 22:52:19.800787   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.800795   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:19.800801   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:19.800851   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:19.839678   57719 cri.go:89] found id: ""
	I0410 22:52:19.839711   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.839723   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:19.839730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:19.839791   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:19.876983   57719 cri.go:89] found id: ""
	I0410 22:52:19.877007   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.877015   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:19.877020   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:19.877081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:19.918139   57719 cri.go:89] found id: ""
	I0410 22:52:19.918167   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.918177   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:19.918186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:19.918243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:19.954770   57719 cri.go:89] found id: ""
	I0410 22:52:19.954808   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.954818   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:19.954825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:19.954881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:19.993643   57719 cri.go:89] found id: ""
	I0410 22:52:19.993670   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.993680   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:19.993687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:19.993746   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:20.030466   57719 cri.go:89] found id: ""
	I0410 22:52:20.030494   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.030503   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:20.030510   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:20.030575   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:20.069264   57719 cri.go:89] found id: ""
	I0410 22:52:20.069291   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.069299   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:20.069307   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:20.069318   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:20.117354   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:20.117382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:20.170758   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:20.170800   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:20.187014   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:20.187055   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:20.269620   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:20.269645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:20.269661   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:22.844841   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:22.861923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:22.861983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:22.907972   57719 cri.go:89] found id: ""
	I0410 22:52:22.908000   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.908010   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:22.908017   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:22.908081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:22.949822   57719 cri.go:89] found id: ""
	I0410 22:52:22.949851   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.949861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:22.949869   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:22.949935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:22.989872   57719 cri.go:89] found id: ""
	I0410 22:52:22.989895   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.989902   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:22.989908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:22.989959   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:23.031881   57719 cri.go:89] found id: ""
	I0410 22:52:23.031900   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.031908   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:23.031913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:23.031978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:23.071691   57719 cri.go:89] found id: ""
	I0410 22:52:23.071719   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.071726   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:23.071732   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:23.071792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:23.109961   57719 cri.go:89] found id: ""
	I0410 22:52:23.109990   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.110001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:23.110009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:23.110069   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:23.152955   57719 cri.go:89] found id: ""
	I0410 22:52:23.152979   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.152986   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:23.152991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:23.153054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:23.191883   57719 cri.go:89] found id: ""
	I0410 22:52:23.191924   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.191935   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:23.191947   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:23.191959   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:23.232692   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:23.232731   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:23.283648   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:23.283684   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:23.297701   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:23.297729   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:23.381657   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:23.381673   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:23.381685   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:25.961531   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:25.977539   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:25.977639   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:26.021844   57719 cri.go:89] found id: ""
	I0410 22:52:26.021875   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.021886   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:26.021893   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:26.021954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:26.064286   57719 cri.go:89] found id: ""
	I0410 22:52:26.064316   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.064327   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:26.064335   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:26.064394   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:26.104381   57719 cri.go:89] found id: ""
	I0410 22:52:26.104426   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.104437   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:26.104445   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:26.104522   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:26.143382   57719 cri.go:89] found id: ""
	I0410 22:52:26.143407   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.143417   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:26.143424   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:26.143489   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:26.179609   57719 cri.go:89] found id: ""
	I0410 22:52:26.179635   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.179646   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:26.179652   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:26.179714   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:26.217660   57719 cri.go:89] found id: ""
	I0410 22:52:26.217689   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.217695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:26.217701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:26.217758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:26.254914   57719 cri.go:89] found id: ""
	I0410 22:52:26.254946   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.254956   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:26.254963   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:26.255047   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:26.293738   57719 cri.go:89] found id: ""
	I0410 22:52:26.293769   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.293779   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:26.293790   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:26.293809   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:26.366700   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:26.366725   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:26.366741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:26.445143   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:26.445183   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:26.493175   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:26.493203   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:26.554952   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:26.554992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.072225   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:29.087075   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:29.087150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:29.131314   57719 cri.go:89] found id: ""
	I0410 22:52:29.131345   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.131357   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:29.131365   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:29.131427   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:29.169263   57719 cri.go:89] found id: ""
	I0410 22:52:29.169289   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.169298   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:29.169304   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:29.169357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:29.209535   57719 cri.go:89] found id: ""
	I0410 22:52:29.209559   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.209570   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:29.209575   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:29.209630   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:29.251172   57719 cri.go:89] found id: ""
	I0410 22:52:29.251225   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.251233   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:29.251238   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:29.251290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:29.296142   57719 cri.go:89] found id: ""
	I0410 22:52:29.296169   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.296179   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:29.296185   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:29.296245   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:29.336910   57719 cri.go:89] found id: ""
	I0410 22:52:29.336933   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.336940   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:29.336946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:29.337003   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:29.396332   57719 cri.go:89] found id: ""
	I0410 22:52:29.396371   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.396382   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:29.396390   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:29.396475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:29.438301   57719 cri.go:89] found id: ""
	I0410 22:52:29.438332   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.438340   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:29.438348   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:29.438360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:29.482687   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:29.482711   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:29.535115   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:29.535146   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.551736   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:29.551760   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:29.624162   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:29.624198   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:29.624213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:32.204355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:32.218239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:32.218310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:32.255412   57719 cri.go:89] found id: ""
	I0410 22:52:32.255440   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.255451   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:32.255458   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:32.255516   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:32.293553   57719 cri.go:89] found id: ""
	I0410 22:52:32.293580   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.293591   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:32.293604   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:32.293663   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:32.332814   57719 cri.go:89] found id: ""
	I0410 22:52:32.332846   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.332855   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:32.332862   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:32.332924   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:32.371312   57719 cri.go:89] found id: ""
	I0410 22:52:32.371347   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.371368   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:32.371376   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:32.371441   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:32.407630   57719 cri.go:89] found id: ""
	I0410 22:52:32.407652   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.407659   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:32.407664   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:32.407720   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:32.444878   57719 cri.go:89] found id: ""
	I0410 22:52:32.444904   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.444914   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:32.444923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:32.444989   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:32.490540   57719 cri.go:89] found id: ""
	I0410 22:52:32.490567   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.490578   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:32.490586   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:32.490644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:32.528911   57719 cri.go:89] found id: ""
	I0410 22:52:32.528953   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.528961   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:32.528969   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:32.528979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:32.608601   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:32.608626   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:32.608641   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:32.684840   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:32.684876   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:32.728092   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:32.728132   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:32.778491   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:32.778524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.296228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:35.310615   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:35.310705   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:35.377585   57719 cri.go:89] found id: ""
	I0410 22:52:35.377612   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.377623   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:35.377632   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:35.377692   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:35.417734   57719 cri.go:89] found id: ""
	I0410 22:52:35.417775   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.417796   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:35.417803   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:35.417864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:35.456256   57719 cri.go:89] found id: ""
	I0410 22:52:35.456281   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.456291   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:35.456298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:35.456382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:35.495233   57719 cri.go:89] found id: ""
	I0410 22:52:35.495257   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.495267   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:35.495274   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:35.495333   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:35.535239   57719 cri.go:89] found id: ""
	I0410 22:52:35.535273   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.535284   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:35.535292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:35.535352   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:35.571601   57719 cri.go:89] found id: ""
	I0410 22:52:35.571628   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.571638   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:35.571645   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:35.571708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:35.612008   57719 cri.go:89] found id: ""
	I0410 22:52:35.612036   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.612045   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:35.612051   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:35.612099   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:35.649029   57719 cri.go:89] found id: ""
	I0410 22:52:35.649057   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.649065   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:35.649073   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:35.649084   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:35.702630   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:35.702668   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.718404   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:35.718433   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:35.798380   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:35.798405   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:35.798420   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:35.874049   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:35.874085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:38.416265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:38.430921   57719 kubeadm.go:591] duration metric: took 4m3.090666464s to restartPrimaryControlPlane
	W0410 22:52:38.431006   57719 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:52:38.431030   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:52:41.138973   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.707913754s)
	I0410 22:52:41.139063   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:52:41.155646   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:52:41.166345   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:52:41.176443   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:52:41.176481   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:52:41.176547   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:52:41.186887   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:52:41.186960   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:52:41.199740   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:52:41.209843   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:52:41.209901   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:52:41.219804   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.229739   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:52:41.229807   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.240127   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:52:41.249763   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:52:41.249824   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:52:41.260148   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:52:41.334127   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:52:41.334200   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:52:41.506104   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:52:41.506307   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:52:41.506488   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:52:41.715227   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:52:41.717460   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:52:41.717564   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:52:41.717654   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:52:41.717781   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:52:41.717898   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:52:41.718004   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:52:41.718099   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:52:41.718203   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:52:41.718550   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:52:41.719083   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:52:41.719413   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:52:41.719571   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:52:41.719675   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:52:41.998202   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:52:42.109508   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:52:42.315545   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:52:42.448910   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:52:42.465903   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:52:42.467312   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:52:42.467387   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:52:42.636790   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:52:42.638969   57719 out.go:204]   - Booting up control plane ...
	I0410 22:52:42.639106   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:52:42.652152   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:52:42.653843   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:52:42.654719   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:52:42.658006   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:53:22.660165   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:53:22.660260   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:22.660520   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:27.660705   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:27.660919   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:37.661409   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:37.661698   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:57.662444   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:57.662687   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:54:37.664290   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:54:37.664604   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:54:37.664634   57719 kubeadm.go:309] 
	I0410 22:54:37.664776   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:54:37.664843   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:54:37.664854   57719 kubeadm.go:309] 
	I0410 22:54:37.664901   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:54:37.664968   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:54:37.665086   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:54:37.665101   57719 kubeadm.go:309] 
	I0410 22:54:37.665245   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:54:37.665313   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:54:37.665360   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:54:37.665372   57719 kubeadm.go:309] 
	I0410 22:54:37.665579   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:54:37.665695   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:54:37.665707   57719 kubeadm.go:309] 
	I0410 22:54:37.665868   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:54:37.666063   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:54:37.666192   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:54:37.666272   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:54:37.666284   57719 kubeadm.go:309] 
	I0410 22:54:37.667202   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:37.667329   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:54:37.667420   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0410 22:54:37.667555   57719 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0410 22:54:37.667623   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:54:43.156141   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.488487447s)
	I0410 22:54:43.156227   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:43.170709   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:43.180624   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:43.180647   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:43.180701   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:43.190482   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:43.190533   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:43.200261   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:43.210061   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:43.210116   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:43.220430   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.230810   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:43.230877   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.241141   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:43.251043   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:43.251111   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:54:43.261163   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:43.534002   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:56:40.435994   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:56:40.436123   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0410 22:56:40.437810   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:56:40.437872   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:56:40.437967   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:56:40.438082   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:56:40.438235   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:56:40.438321   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:56:40.440009   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:56:40.440110   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:56:40.440210   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:56:40.440336   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:56:40.440417   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:56:40.440501   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:56:40.440563   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:56:40.440622   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:56:40.440685   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:56:40.440752   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:56:40.440858   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:56:40.440923   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:56:40.441004   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:56:40.441076   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:56:40.441131   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:56:40.441185   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:56:40.441242   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:56:40.441375   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:56:40.441501   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:56:40.441565   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:56:40.441658   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:56:40.443122   57719 out.go:204]   - Booting up control plane ...
	I0410 22:56:40.443230   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:56:40.443332   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:56:40.443431   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:56:40.443549   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:56:40.443710   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:56:40.443783   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:56:40.443883   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444111   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444200   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444429   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444520   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444761   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444869   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445124   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445235   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445416   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445423   57719 kubeadm.go:309] 
	I0410 22:56:40.445465   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:56:40.445512   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:56:40.445520   57719 kubeadm.go:309] 
	I0410 22:56:40.445548   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:56:40.445595   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:56:40.445712   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:56:40.445722   57719 kubeadm.go:309] 
	I0410 22:56:40.445880   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:56:40.445931   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:56:40.445967   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:56:40.445972   57719 kubeadm.go:309] 
	I0410 22:56:40.446095   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:56:40.446190   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:56:40.446201   57719 kubeadm.go:309] 
	I0410 22:56:40.446326   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:56:40.446452   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:56:40.446548   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:56:40.446611   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:56:40.446659   57719 kubeadm.go:309] 
	I0410 22:56:40.446681   57719 kubeadm.go:393] duration metric: took 8m5.163157284s to StartCluster
	I0410 22:56:40.446805   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:56:40.446880   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:56:40.499163   57719 cri.go:89] found id: ""
	I0410 22:56:40.499196   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.499205   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:56:40.499212   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:56:40.499292   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:56:40.545429   57719 cri.go:89] found id: ""
	I0410 22:56:40.545465   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.545473   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:56:40.545479   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:56:40.545538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:56:40.583842   57719 cri.go:89] found id: ""
	I0410 22:56:40.583870   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.583880   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:56:40.583887   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:56:40.583957   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:56:40.621054   57719 cri.go:89] found id: ""
	I0410 22:56:40.621075   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.621083   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:56:40.621091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:56:40.621149   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:56:40.665133   57719 cri.go:89] found id: ""
	I0410 22:56:40.665161   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.665168   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:56:40.665175   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:56:40.665231   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:56:40.707490   57719 cri.go:89] found id: ""
	I0410 22:56:40.707519   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.707529   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:56:40.707536   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:56:40.707598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:56:40.748539   57719 cri.go:89] found id: ""
	I0410 22:56:40.748565   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.748576   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:56:40.748584   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:56:40.748644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:56:40.792326   57719 cri.go:89] found id: ""
	I0410 22:56:40.792349   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.792358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:56:40.792366   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:56:40.792376   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:56:40.844309   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:56:40.844346   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:56:40.859678   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:56:40.859715   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:56:40.950099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:56:40.950123   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:56:40.950141   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:56:41.073547   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:56:41.073589   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0410 22:56:41.124970   57719 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0410 22:56:41.125024   57719 out.go:239] * 
	W0410 22:56:41.125096   57719 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.125129   57719 out.go:239] * 
	W0410 22:56:41.126153   57719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:56:41.129869   57719 out.go:177] 
	W0410 22:56:41.131207   57719 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.131286   57719 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0410 22:56:41.131326   57719 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0410 22:56:41.133049   57719 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-862528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
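For reference, the follow-up checks suggested in the captured kubeadm/minikube output above would look roughly like the following when run against this profile; the profile name and the --extra-config flag are taken from the log itself, so this is an illustrative sketch of the suggested troubleshooting, not a verified fix for this failure:

	# inspect the kubelet on the node (suggested by kubeadm above)
	systemctl status kubelet
	journalctl -xeu kubelet
	# list control-plane containers under cri-o and read the logs of a failing one (CONTAINERID is a placeholder)
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# retry the start with the cgroup-driver override suggested by minikube
	minikube start -p old-k8s-version-862528 --extra-config=kubelet.cgroup-driver=systemd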
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 2 (256.76713ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-862528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-862528 logs -n 25: (1.539671218s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-646133             | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:41 UTC |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:42 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-706500            | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC | 10 Apr 24 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862528        | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-646133                  | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-464519                              | cert-expiration-464519       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-676292 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	|         | disable-driver-mounts-676292                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862528             | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-519831  | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-706500                 | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:54 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-519831       | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC | 10 Apr 24 22:53 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:46:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:46:47.395706   58701 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:46:47.395991   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396002   58701 out.go:304] Setting ErrFile to fd 2...
	I0410 22:46:47.396019   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396208   58701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:46:47.396802   58701 out.go:298] Setting JSON to false
	I0410 22:46:47.397726   58701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5350,"bootTime":1712783858,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:46:47.397786   58701 start.go:139] virtualization: kvm guest
	I0410 22:46:47.400191   58701 out.go:177] * [default-k8s-diff-port-519831] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:46:47.401578   58701 notify.go:220] Checking for updates...
	I0410 22:46:47.402880   58701 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:46:47.404311   58701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:46:47.405790   58701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:46:47.407012   58701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:46:47.408130   58701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:46:47.409497   58701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:46:47.411183   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:46:47.411591   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.411632   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.426322   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0410 22:46:47.426759   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.427345   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.427366   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.427716   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.427926   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.428221   58701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:46:47.428646   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.428696   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.444105   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0410 22:46:47.444537   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.445035   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.445058   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.445398   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.445592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.480451   58701 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:46:47.481837   58701 start.go:297] selected driver: kvm2
	I0410 22:46:47.481852   58701 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.481985   58701 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:46:47.482657   58701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.482750   58701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:46:47.498330   58701 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:46:47.498668   58701 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:46:47.498735   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:46:47.498748   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:46:47.498784   58701 start.go:340] cluster config:
	{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.498877   58701 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.500723   58701 out.go:177] * Starting "default-k8s-diff-port-519831" primary control-plane node in "default-k8s-diff-port-519831" cluster
	I0410 22:46:47.180678   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:47.501967   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:46:47.502009   58701 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 22:46:47.502030   58701 cache.go:56] Caching tarball of preloaded images
	I0410 22:46:47.502108   58701 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:46:47.502118   58701 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 22:46:47.502202   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:46:47.502366   58701 start.go:360] acquireMachinesLock for default-k8s-diff-port-519831: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:46:50.252732   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:56.332647   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:59.404660   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:05.484717   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:08.556632   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:14.636753   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:17.708788   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:23.788661   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:26.860683   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:32.940630   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:36.012689   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:42.092749   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:45.164706   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:51.244682   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:54.316652   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:00.396637   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:03.468672   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:06.472768   57719 start.go:364] duration metric: took 4m5.937893783s to acquireMachinesLock for "old-k8s-version-862528"
	I0410 22:48:06.472833   57719 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:06.472852   57719 fix.go:54] fixHost starting: 
	I0410 22:48:06.473157   57719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:06.473186   57719 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:06.488728   57719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0410 22:48:06.489157   57719 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:06.489590   57719 main.go:141] libmachine: Using API Version  1
	I0410 22:48:06.489612   57719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:06.490011   57719 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:06.490171   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:06.490337   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetState
	I0410 22:48:06.491997   57719 fix.go:112] recreateIfNeeded on old-k8s-version-862528: state=Stopped err=<nil>
	I0410 22:48:06.492030   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	W0410 22:48:06.492234   57719 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:06.493891   57719 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862528" ...
	I0410 22:48:06.469869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:06.469904   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470235   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:48:06.470261   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470529   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:48:06.472589   57270 machine.go:97] duration metric: took 4m35.561692081s to provisionDockerMachine
	I0410 22:48:06.472636   57270 fix.go:56] duration metric: took 4m35.586484815s for fixHost
	I0410 22:48:06.472646   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 4m35.586540892s
	W0410 22:48:06.472671   57270 start.go:713] error starting host: provision: host is not running
	W0410 22:48:06.472773   57270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0410 22:48:06.472785   57270 start.go:728] Will try again in 5 seconds ...
	I0410 22:48:06.495233   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .Start
	I0410 22:48:06.495416   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring networks are active...
	I0410 22:48:06.496254   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network default is active
	I0410 22:48:06.496589   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network mk-old-k8s-version-862528 is active
	I0410 22:48:06.497002   57719 main.go:141] libmachine: (old-k8s-version-862528) Getting domain xml...
	I0410 22:48:06.497751   57719 main.go:141] libmachine: (old-k8s-version-862528) Creating domain...
	I0410 22:48:07.722703   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting to get IP...
	I0410 22:48:07.723942   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:07.724373   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:07.724451   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:07.724338   59021 retry.go:31] will retry after 284.455366ms: waiting for machine to come up
	I0410 22:48:08.011077   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.011598   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.011628   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.011545   59021 retry.go:31] will retry after 337.946102ms: waiting for machine to come up
	I0410 22:48:08.351219   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.351725   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.351744   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.351691   59021 retry.go:31] will retry after 454.774669ms: waiting for machine to come up
	I0410 22:48:08.808516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.808953   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.808991   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.808893   59021 retry.go:31] will retry after 484.667282ms: waiting for machine to come up
	I0410 22:48:09.295665   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.296127   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.296148   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.296083   59021 retry.go:31] will retry after 515.00238ms: waiting for machine to come up
	I0410 22:48:09.812855   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.813337   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.813362   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.813289   59021 retry.go:31] will retry after 596.67118ms: waiting for machine to come up
	I0410 22:48:10.411103   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:10.411616   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:10.411640   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:10.411568   59021 retry.go:31] will retry after 1.035822512s: waiting for machine to come up
	I0410 22:48:11.473748   57270 start.go:360] acquireMachinesLock for no-preload-646133: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:48:11.448894   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:11.449358   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:11.449388   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:11.449315   59021 retry.go:31] will retry after 1.258446774s: waiting for machine to come up
	I0410 22:48:12.709048   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:12.709587   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:12.709618   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:12.709530   59021 retry.go:31] will retry after 1.149380432s: waiting for machine to come up
	I0410 22:48:13.860550   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:13.861084   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:13.861110   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:13.861028   59021 retry.go:31] will retry after 1.733388735s: waiting for machine to come up
	I0410 22:48:15.595870   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:15.596447   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:15.596487   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:15.596343   59021 retry.go:31] will retry after 2.536794123s: waiting for machine to come up
	I0410 22:48:18.135592   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:18.136099   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:18.136128   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:18.136056   59021 retry.go:31] will retry after 3.390395523s: waiting for machine to come up
	I0410 22:48:21.528518   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:21.528964   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:21.529008   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:21.528906   59021 retry.go:31] will retry after 4.165145769s: waiting for machine to come up
	I0410 22:48:26.977460   58186 start.go:364] duration metric: took 3m29.815175662s to acquireMachinesLock for "embed-certs-706500"
	I0410 22:48:26.977524   58186 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:26.977532   58186 fix.go:54] fixHost starting: 
	I0410 22:48:26.977935   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:26.977965   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:26.994175   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0410 22:48:26.994552   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:26.995016   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:48:26.995040   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:26.995447   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:26.995652   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:26.995826   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:48:26.997547   58186 fix.go:112] recreateIfNeeded on embed-certs-706500: state=Stopped err=<nil>
	I0410 22:48:26.997580   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	W0410 22:48:26.997902   58186 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:27.000500   58186 out.go:177] * Restarting existing kvm2 VM for "embed-certs-706500" ...
	I0410 22:48:27.002204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Start
	I0410 22:48:27.002398   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring networks are active...
	I0410 22:48:27.003133   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network default is active
	I0410 22:48:27.003465   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network mk-embed-certs-706500 is active
	I0410 22:48:27.003863   58186 main.go:141] libmachine: (embed-certs-706500) Getting domain xml...
	I0410 22:48:27.004603   58186 main.go:141] libmachine: (embed-certs-706500) Creating domain...
	I0410 22:48:25.699595   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700129   57719 main.go:141] libmachine: (old-k8s-version-862528) Found IP for machine: 192.168.61.178
	I0410 22:48:25.700159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has current primary IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700166   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserving static IP address...
	I0410 22:48:25.700654   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserved static IP address: 192.168.61.178
	I0410 22:48:25.700676   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting for SSH to be available...
	I0410 22:48:25.700704   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.700732   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | skip adding static IP to network mk-old-k8s-version-862528 - found existing host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"}
	I0410 22:48:25.700745   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Getting to WaitForSSH function...
	I0410 22:48:25.702929   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703290   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.703322   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703490   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH client type: external
	I0410 22:48:25.703519   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa (-rw-------)
	I0410 22:48:25.703551   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:25.703590   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | About to run SSH command:
	I0410 22:48:25.703635   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | exit 0
	I0410 22:48:25.832738   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | SSH cmd err, output: <nil>: 
	I0410 22:48:25.833133   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetConfigRaw
	I0410 22:48:25.833784   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:25.836323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.836874   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.836908   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.837156   57719 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json ...
	I0410 22:48:25.837472   57719 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:25.837502   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:25.837710   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.840159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840488   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.840516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840593   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.840815   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.840992   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.841134   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.841337   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.841543   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.841556   57719 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:25.957153   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:25.957189   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957438   57719 buildroot.go:166] provisioning hostname "old-k8s-version-862528"
	I0410 22:48:25.957461   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.960779   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961149   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.961184   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961332   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.961546   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961689   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961864   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.962020   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.962196   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.962207   57719 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862528 && echo "old-k8s-version-862528" | sudo tee /etc/hostname
	I0410 22:48:26.087073   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862528
	
	I0410 22:48:26.087099   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.089770   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090109   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.090140   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090261   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.090446   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090623   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090760   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.090951   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.091131   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.091155   57719 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:26.214422   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:26.214462   57719 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:26.214490   57719 buildroot.go:174] setting up certificates
	I0410 22:48:26.214498   57719 provision.go:84] configureAuth start
	I0410 22:48:26.214509   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:26.214793   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.217463   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217809   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.217850   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217975   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.219971   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220235   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.220265   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220480   57719 provision.go:143] copyHostCerts
	I0410 22:48:26.220526   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:26.220542   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:26.220604   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:26.220703   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:26.220712   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:26.220736   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:26.220789   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:26.220796   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:26.220817   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:26.220864   57719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862528 san=[127.0.0.1 192.168.61.178 localhost minikube old-k8s-version-862528]
	I0410 22:48:26.288372   57719 provision.go:177] copyRemoteCerts
	I0410 22:48:26.288445   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:26.288468   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.290980   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291298   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.291339   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291444   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.291635   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.291809   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.291927   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.379823   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:26.405285   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0410 22:48:26.430122   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:26.456124   57719 provision.go:87] duration metric: took 241.614364ms to configureAuth
	I0410 22:48:26.456154   57719 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:26.456356   57719 config.go:182] Loaded profile config "old-k8s-version-862528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:48:26.456480   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.459028   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459335   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.459366   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.459742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.459888   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.460037   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.460211   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.460379   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.460413   57719 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:26.732588   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:26.732614   57719 machine.go:97] duration metric: took 895.122467ms to provisionDockerMachine
	I0410 22:48:26.732627   57719 start.go:293] postStartSetup for "old-k8s-version-862528" (driver="kvm2")
	I0410 22:48:26.732641   57719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:26.732679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.733014   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:26.733044   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.735820   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736217   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.736244   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736418   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.736630   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.736840   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.737020   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.823452   57719 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:26.827806   57719 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:26.827827   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:26.827899   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:26.828009   57719 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:26.828122   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:26.837564   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:26.862278   57719 start.go:296] duration metric: took 129.638185ms for postStartSetup
	I0410 22:48:26.862325   57719 fix.go:56] duration metric: took 20.389482643s for fixHost
	I0410 22:48:26.862346   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.864911   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865277   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.865301   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865419   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.865597   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865872   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.866083   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.866283   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.866300   57719 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:26.977317   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789306.948982315
	
	I0410 22:48:26.977337   57719 fix.go:216] guest clock: 1712789306.948982315
	I0410 22:48:26.977344   57719 fix.go:229] Guest: 2024-04-10 22:48:26.948982315 +0000 UTC Remote: 2024-04-10 22:48:26.862329953 +0000 UTC m=+266.486936912 (delta=86.652362ms)
	I0410 22:48:26.977362   57719 fix.go:200] guest clock delta is within tolerance: 86.652362ms
	I0410 22:48:26.977366   57719 start.go:83] releasing machines lock for "old-k8s-version-862528", held for 20.504554043s
	I0410 22:48:26.977386   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.977653   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.980035   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980376   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.980419   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980602   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981224   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981421   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981516   57719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:26.981558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.981645   57719 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:26.981670   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.984375   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984568   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984840   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.984868   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984953   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985030   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.985079   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.985118   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985236   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985277   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985374   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985450   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.985516   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985635   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:27.105002   57719 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:27.111205   57719 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:27.261678   57719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:27.268336   57719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:27.268423   57719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:27.290099   57719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:27.290122   57719 start.go:494] detecting cgroup driver to use...
	I0410 22:48:27.290174   57719 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:27.308787   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:27.325557   57719 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:27.325611   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:27.340859   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:27.355570   57719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:27.479670   57719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:27.653364   57719 docker.go:233] disabling docker service ...
	I0410 22:48:27.653424   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:27.669775   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:27.683654   57719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:27.813212   57719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:27.929620   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:27.946085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:27.966341   57719 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0410 22:48:27.966404   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.978022   57719 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:27.978111   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.989324   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.001429   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.012965   57719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:28.024663   57719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:28.034362   57719 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:28.034423   57719 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:28.048740   57719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:48:28.060698   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:28.188526   57719 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:28.348442   57719 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:28.348523   57719 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:28.353501   57719 start.go:562] Will wait 60s for crictl version
	I0410 22:48:28.353566   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:28.357486   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:28.391138   57719 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:28.391221   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.421399   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.455851   57719 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0410 22:48:28.457534   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:28.460913   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461297   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:28.461323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461558   57719 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:28.466450   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:28.480549   57719 kubeadm.go:877] updating cluster {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:28.480671   57719 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:48:28.480775   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:28.536971   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:28.537034   57719 ssh_runner.go:195] Run: which lz4
	I0410 22:48:28.541757   57719 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:48:28.546381   57719 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:28.546413   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0410 22:48:30.411805   57719 crio.go:462] duration metric: took 1.870076139s to copy over tarball
	I0410 22:48:30.411900   57719 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:28.229217   58186 main.go:141] libmachine: (embed-certs-706500) Waiting to get IP...
	I0410 22:48:28.230257   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.230673   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.230724   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.230643   59155 retry.go:31] will retry after 262.296498ms: waiting for machine to come up
	I0410 22:48:28.494117   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.494631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.494660   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.494584   59155 retry.go:31] will retry after 237.287095ms: waiting for machine to come up
	I0410 22:48:28.733250   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.733795   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.733817   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.733755   59155 retry.go:31] will retry after 387.436239ms: waiting for machine to come up
	I0410 22:48:29.123585   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.124128   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.124163   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.124073   59155 retry.go:31] will retry after 428.418916ms: waiting for machine to come up
	I0410 22:48:29.554781   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.555244   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.555285   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.555235   59155 retry.go:31] will retry after 683.194159ms: waiting for machine to come up
	I0410 22:48:30.239955   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:30.240385   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:30.240463   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:30.240365   59155 retry.go:31] will retry after 764.240086ms: waiting for machine to come up
	I0410 22:48:31.006294   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:31.006789   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:31.006816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:31.006750   59155 retry.go:31] will retry after 1.113674235s: waiting for machine to come up
	I0410 22:48:33.358026   57719 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946092727s)
	I0410 22:48:33.358059   57719 crio.go:469] duration metric: took 2.946222933s to extract the tarball
	I0410 22:48:33.358069   57719 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:33.402924   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:33.441006   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:33.441033   57719 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:48:33.441090   57719 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.441142   57719 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.441203   57719 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.441210   57719 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.441318   57719 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0410 22:48:33.441339   57719 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.441375   57719 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.441395   57719 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442645   57719 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.442667   57719 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.442706   57719 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.442717   57719 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0410 22:48:33.442796   57719 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.442807   57719 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442814   57719 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.442866   57719 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.651119   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.652634   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.665548   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.669396   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.672510   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.674137   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0410 22:48:33.686915   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.756592   57719 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0410 22:48:33.756639   57719 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.756696   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.756696   57719 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0410 22:48:33.756789   57719 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.756810   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867043   57719 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0410 22:48:33.867061   57719 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0410 22:48:33.867090   57719 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.867091   57719 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.867135   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867166   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867185   57719 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0410 22:48:33.867220   57719 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.867252   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867261   57719 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0410 22:48:33.867303   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.867311   57719 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0410 22:48:33.867355   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867359   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.867286   57719 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0410 22:48:33.867452   57719 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.867481   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.871719   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.881086   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.964827   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.964854   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0410 22:48:33.964932   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0410 22:48:33.964948   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.976084   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0410 22:48:33.976155   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0410 22:48:33.976205   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0410 22:48:34.011460   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0410 22:48:34.289751   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:34.429542   57719 cache_images.go:92] duration metric: took 988.487885ms to LoadCachedImages
	W0410 22:48:34.429636   57719 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0410 22:48:34.429665   57719 kubeadm.go:928] updating node { 192.168.61.178 8443 v1.20.0 crio true true} ...
	I0410 22:48:34.429782   57719 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:34.429870   57719 ssh_runner.go:195] Run: crio config
	I0410 22:48:34.478794   57719 cni.go:84] Creating CNI manager for ""
	I0410 22:48:34.478829   57719 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:34.478845   57719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:34.478868   57719 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862528 NodeName:old-k8s-version-862528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0410 22:48:34.479065   57719 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862528"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:34.479147   57719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0410 22:48:34.489950   57719 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:34.490007   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:34.500261   57719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0410 22:48:34.517530   57719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:34.534814   57719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0410 22:48:34.552669   57719 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:34.556612   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:34.569643   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:34.700791   57719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:34.719682   57719 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528 for IP: 192.168.61.178
	I0410 22:48:34.719703   57719 certs.go:194] generating shared ca certs ...
	I0410 22:48:34.719722   57719 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:34.719900   57719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:34.719951   57719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:34.719965   57719 certs.go:256] generating profile certs ...
	I0410 22:48:34.720091   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.key
	I0410 22:48:34.720155   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c
	I0410 22:48:34.720199   57719 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key
	I0410 22:48:34.720337   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:34.720376   57719 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:34.720386   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:34.720438   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:34.720472   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:34.720502   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:34.720557   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:34.721238   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:34.769810   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:34.805397   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:34.846743   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:34.888720   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0410 22:48:34.915958   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:48:34.962182   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:34.992444   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:35.023525   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:35.051098   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:35.077305   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:35.102172   57719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:35.121381   57719 ssh_runner.go:195] Run: openssl version
	I0410 22:48:35.127869   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:35.140056   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145172   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145242   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.152081   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:48:35.164621   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:35.176511   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182164   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182217   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.188968   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:35.201491   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:35.213468   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218519   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218586   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.224872   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:35.236964   57719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:35.242262   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:35.249245   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:35.256301   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:35.263359   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:35.270166   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:35.276953   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:48:35.283529   57719 kubeadm.go:391] StartCluster: {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:35.283643   57719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:35.283700   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.328461   57719 cri.go:89] found id: ""
	I0410 22:48:35.328532   57719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:35.340207   57719 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:35.340235   57719 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:35.340245   57719 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:35.340293   57719 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:35.351212   57719 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:35.352189   57719 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:48:35.352989   57719 kubeconfig.go:62] /home/jenkins/minikube-integration/18610-5679/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862528" cluster setting kubeconfig missing "old-k8s-version-862528" context setting]
	I0410 22:48:35.353956   57719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:32.122313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:32.122773   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:32.122816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:32.122717   59155 retry.go:31] will retry after 1.052378413s: waiting for machine to come up
	I0410 22:48:33.176207   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:33.176621   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:33.176665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:33.176568   59155 retry.go:31] will retry after 1.548572633s: waiting for machine to come up
	I0410 22:48:34.726554   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:34.726992   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:34.727020   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:34.726938   59155 retry.go:31] will retry after 1.800911659s: waiting for machine to come up
	I0410 22:48:36.529629   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:36.530133   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:36.530164   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:36.530085   59155 retry.go:31] will retry after 2.434743044s: waiting for machine to come up
	I0410 22:48:35.428830   57719 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:35.479813   57719 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.178
	I0410 22:48:35.479853   57719 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:35.479882   57719 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:35.479940   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.520506   57719 cri.go:89] found id: ""
	I0410 22:48:35.520577   57719 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:35.538167   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:35.548571   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:35.548600   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:35.548662   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:35.558559   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:35.558612   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:35.568950   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:35.578644   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:35.578712   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:35.589075   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.600265   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:35.600321   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.611459   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:35.621712   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:35.621785   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:35.632133   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:35.643494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:35.775309   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.133286   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.35793645s)
	I0410 22:48:37.133334   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.368687   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.497136   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.584652   57719 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:37.584744   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.085293   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.585489   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.584951   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:40.085144   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.966866   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:38.967360   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:38.967383   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:38.967339   59155 retry.go:31] will retry after 3.219302627s: waiting for machine to come up
	I0410 22:48:40.585356   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.084839   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.585434   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.085797   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.585578   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.085621   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.585581   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.584785   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:45.085394   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.409467   58701 start.go:364] duration metric: took 1m58.907071516s to acquireMachinesLock for "default-k8s-diff-port-519831"
	I0410 22:48:46.409536   58701 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:46.409557   58701 fix.go:54] fixHost starting: 
	I0410 22:48:46.410030   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:46.410080   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:46.427877   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I0410 22:48:46.428357   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:46.428836   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:48:46.428858   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:46.429163   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:46.429354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:48:46.429494   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:48:46.431151   58701 fix.go:112] recreateIfNeeded on default-k8s-diff-port-519831: state=Stopped err=<nil>
	I0410 22:48:46.431192   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	W0410 22:48:46.431372   58701 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:46.433597   58701 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-519831" ...
	I0410 22:48:42.187835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:42.188266   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:42.188305   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:42.188191   59155 retry.go:31] will retry after 2.924293511s: waiting for machine to come up
	I0410 22:48:45.113669   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114211   58186 main.go:141] libmachine: (embed-certs-706500) Found IP for machine: 192.168.39.10
	I0410 22:48:45.114229   58186 main.go:141] libmachine: (embed-certs-706500) Reserving static IP address...
	I0410 22:48:45.114243   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has current primary IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114685   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.114711   58186 main.go:141] libmachine: (embed-certs-706500) DBG | skip adding static IP to network mk-embed-certs-706500 - found existing host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"}
	I0410 22:48:45.114721   58186 main.go:141] libmachine: (embed-certs-706500) Reserved static IP address: 192.168.39.10
	I0410 22:48:45.114728   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Getting to WaitForSSH function...
	I0410 22:48:45.114743   58186 main.go:141] libmachine: (embed-certs-706500) Waiting for SSH to be available...
	I0410 22:48:45.116708   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.116963   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.117007   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.117139   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH client type: external
	I0410 22:48:45.117167   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa (-rw-------)
	I0410 22:48:45.117198   58186 main.go:141] libmachine: (embed-certs-706500) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:45.117224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | About to run SSH command:
	I0410 22:48:45.117236   58186 main.go:141] libmachine: (embed-certs-706500) DBG | exit 0
	I0410 22:48:45.240518   58186 main.go:141] libmachine: (embed-certs-706500) DBG | SSH cmd err, output: <nil>: 
	I0410 22:48:45.240843   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetConfigRaw
	I0410 22:48:45.241532   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.243908   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244293   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.244317   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244576   58186 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/config.json ...
	I0410 22:48:45.244775   58186 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:45.244799   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:45.245004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.247248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247639   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.247665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247859   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.248039   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248217   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248375   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.248543   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.248746   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.248766   58186 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:45.357146   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:45.357177   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357428   58186 buildroot.go:166] provisioning hostname "embed-certs-706500"
	I0410 22:48:45.357447   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357624   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.360299   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360700   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.360796   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360838   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.361049   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361183   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361367   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.361537   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.361702   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.361716   58186 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-706500 && echo "embed-certs-706500" | sudo tee /etc/hostname
	I0410 22:48:45.487121   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-706500
	
	I0410 22:48:45.487160   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.490242   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490597   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.490625   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490805   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.491004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491359   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.491576   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.491792   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.491824   58186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:45.606186   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:45.606212   58186 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:45.606246   58186 buildroot.go:174] setting up certificates
	I0410 22:48:45.606257   58186 provision.go:84] configureAuth start
	I0410 22:48:45.606269   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.606594   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.609459   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.609893   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.609932   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.610134   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.612631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.612945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.612979   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.613144   58186 provision.go:143] copyHostCerts
	I0410 22:48:45.613193   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:45.613207   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:45.613262   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:45.613378   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:45.613393   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:45.613427   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:45.613495   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:45.613505   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:45.613529   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:45.613592   58186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.embed-certs-706500 san=[127.0.0.1 192.168.39.10 embed-certs-706500 localhost minikube]
	I0410 22:48:45.737049   58186 provision.go:177] copyRemoteCerts
	I0410 22:48:45.737105   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:45.737129   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.739712   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740060   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.740089   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740347   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.740589   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.740763   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.740957   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:45.828677   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:45.854080   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:48:45.878704   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:45.902611   58186 provision.go:87] duration metric: took 296.343353ms to configureAuth
	I0410 22:48:45.902640   58186 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:45.902879   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:48:45.902962   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.905588   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.905950   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.905972   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.906165   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.906360   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906473   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906561   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.906725   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.906887   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.906911   58186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:46.172772   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:46.172807   58186 machine.go:97] duration metric: took 928.014662ms to provisionDockerMachine
	I0410 22:48:46.172823   58186 start.go:293] postStartSetup for "embed-certs-706500" (driver="kvm2")
	I0410 22:48:46.172836   58186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:46.172877   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.173197   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:46.173223   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.176113   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.176495   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176679   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.176896   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.177118   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.177328   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.260470   58186 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:46.265003   58186 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:46.265030   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:46.265088   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:46.265158   58186 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:46.265241   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:46.274931   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:46.300036   58186 start.go:296] duration metric: took 127.199834ms for postStartSetup
	I0410 22:48:46.300082   58186 fix.go:56] duration metric: took 19.322550114s for fixHost
	I0410 22:48:46.300108   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.302945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303252   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.303279   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303479   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.303700   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303861   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303990   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.304140   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:46.304308   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:46.304318   58186 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:46.409294   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789326.385898055
	
	I0410 22:48:46.409317   58186 fix.go:216] guest clock: 1712789326.385898055
	I0410 22:48:46.409327   58186 fix.go:229] Guest: 2024-04-10 22:48:46.385898055 +0000 UTC Remote: 2024-04-10 22:48:46.300087658 +0000 UTC m=+229.287947250 (delta=85.810397ms)
	I0410 22:48:46.409352   58186 fix.go:200] guest clock delta is within tolerance: 85.810397ms
	I0410 22:48:46.409360   58186 start.go:83] releasing machines lock for "embed-certs-706500", held for 19.431860062s
	I0410 22:48:46.409389   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.409752   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:46.412201   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412616   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.412651   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412790   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413361   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413559   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413617   58186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:46.413665   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.413796   58186 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:46.413831   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.416879   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417268   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417428   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.417630   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.417811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.417835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417858   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417938   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.418030   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.418154   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.418284   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.418463   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
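	The two sshutil.go lines above carry everything needed to reach the guest: the VM's IP, port 22, the per-machine private key, and the "docker" user. Below is a minimal sketch of that setup using golang.org/x/crypto/ssh; the dialWithKey helper is hypothetical and this is illustrative only, not minikube's actual sshutil implementation.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialWithKey (hypothetical helper) opens an SSH connection from the same
// ingredients the log reports: IP, port, key path, username.
func dialWithKey(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}

func main() {
	client, err := dialWithKey("192.168.39.10", 22,
		"/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa",
		"docker")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```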
	I0410 22:48:46.529204   58186 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:46.535396   58186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:46.681100   58186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:46.687278   58186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:46.687340   58186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:46.703105   58186 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:46.703128   58186 start.go:494] detecting cgroup driver to use...
	I0410 22:48:46.703191   58186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:46.719207   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:46.733444   58186 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:46.733509   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:46.747369   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:46.762231   58186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:46.874897   58186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:47.023672   58186 docker.go:233] disabling docker service ...
	I0410 22:48:47.023749   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:47.038963   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:47.053827   58186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:46.435268   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Start
	I0410 22:48:46.435498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring networks are active...
	I0410 22:48:46.436266   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network default is active
	I0410 22:48:46.436691   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network mk-default-k8s-diff-port-519831 is active
	I0410 22:48:46.437163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Getting domain xml...
	I0410 22:48:46.437799   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Creating domain...
	I0410 22:48:47.206641   58186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:47.363331   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:47.380657   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:47.402234   58186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:48:47.402306   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.419356   58186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:47.419417   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.435320   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.450812   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.462588   58186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:47.474323   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.494156   58186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.515195   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.526148   58186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:47.536045   58186 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:47.536106   58186 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:47.549556   58186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:48:47.567236   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:47.702628   58186 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:47.848908   58186 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:47.849000   58186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:47.854126   58186 start.go:562] Will wait 60s for crictl version
	I0410 22:48:47.854191   58186 ssh_runner.go:195] Run: which crictl
	I0410 22:48:47.858095   58186 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:47.897714   58186 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:47.897805   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.927597   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.958357   58186 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:48:45.584769   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.085396   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.585857   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.085186   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.585668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.085585   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.585617   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.085227   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.585626   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:50.084900   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.959811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:47.962805   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963246   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:47.963276   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963510   58186 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:47.967753   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:47.981154   58186 kubeadm.go:877] updating cluster {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:47.981258   58186 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:48:47.981298   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:48.018208   58186 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:48:48.018274   58186 ssh_runner.go:195] Run: which lz4
	I0410 22:48:48.023613   58186 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0410 22:48:48.029036   58186 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:48.029063   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:48:49.637729   58186 crio.go:462] duration metric: took 1.61414003s to copy over tarball
	I0410 22:48:49.637796   58186 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:52.046454   58186 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.408634496s)
	I0410 22:48:52.046482   58186 crio.go:469] duration metric: took 2.408728343s to extract the tarball
	I0410 22:48:52.046489   58186 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:47.701355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting to get IP...
	I0410 22:48:47.702406   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.702994   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.703067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.702962   59362 retry.go:31] will retry after 292.834608ms: waiting for machine to come up
	I0410 22:48:47.997294   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997757   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997785   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.997701   59362 retry.go:31] will retry after 341.35168ms: waiting for machine to come up
	I0410 22:48:48.340842   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341347   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341379   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.341279   59362 retry.go:31] will retry after 438.041848ms: waiting for machine to come up
	I0410 22:48:48.780565   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781092   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781116   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.781038   59362 retry.go:31] will retry after 557.770882ms: waiting for machine to come up
	I0410 22:48:49.340858   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.341282   59362 retry.go:31] will retry after 637.316206ms: waiting for machine to come up
	I0410 22:48:49.980256   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980737   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980761   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.980696   59362 retry.go:31] will retry after 909.873955ms: waiting for machine to come up
	I0410 22:48:50.891776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892197   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892229   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:50.892147   59362 retry.go:31] will retry after 745.06949ms: waiting for machine to come up
	I0410 22:48:51.638436   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638907   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638933   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:51.638854   59362 retry.go:31] will retry after 1.060037191s: waiting for machine to come up
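	The retry.go lines above poll libvirt for the domain's DHCP lease, sleeping a little longer after each miss until the machine reports an IP. A minimal sketch of that wait loop follows; currentIP is a hypothetical stand-in for the lease lookup, and the delay growth only roughly mimics the intervals shown in the log.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// currentIP is a hypothetical placeholder for asking libvirt/DHCP for the lease.
func currentIP(domain string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries until the domain has an address, waiting a bit longer each time.
func waitForIP(domain string, attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := currentIP(domain); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the backoff between attempts
	}
	return "", fmt.Errorf("%s never obtained an IP", domain)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-519831", 10); err != nil {
		fmt.Println(err)
	}
}
```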
	I0410 22:48:50.585691   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.085669   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.585308   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.085393   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.585619   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.085643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.585076   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.585027   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.085629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.087135   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:52.139368   58186 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:48:52.139389   58186 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:48:52.139397   58186 kubeadm.go:928] updating node { 192.168.39.10 8443 v1.29.3 crio true true} ...
	I0410 22:48:52.139535   58186 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-706500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:52.139629   58186 ssh_runner.go:195] Run: crio config
	I0410 22:48:52.193347   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:48:52.193375   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:52.193390   58186 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:52.193429   58186 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-706500 NodeName:embed-certs-706500 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:48:52.193606   58186 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-706500"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:52.193686   58186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:48:52.206450   58186 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:52.206507   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:52.218898   58186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0410 22:48:52.239285   58186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:52.257083   58186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0410 22:48:52.275448   58186 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:52.279486   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:52.293308   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:52.428424   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:52.446713   58186 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500 for IP: 192.168.39.10
	I0410 22:48:52.446738   58186 certs.go:194] generating shared ca certs ...
	I0410 22:48:52.446759   58186 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:52.446937   58186 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:52.446980   58186 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:52.446990   58186 certs.go:256] generating profile certs ...
	I0410 22:48:52.447059   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/client.key
	I0410 22:48:52.447124   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key.f3045f1a
	I0410 22:48:52.447156   58186 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key
	I0410 22:48:52.447294   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:52.447328   58186 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:52.447335   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:52.447354   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:52.447374   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:52.447405   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:52.447457   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:52.448166   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:52.481862   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:52.530983   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:52.572191   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:52.614466   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0410 22:48:52.644331   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:48:52.672811   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:52.698376   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:52.723998   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:52.749405   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:52.777529   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:52.803663   58186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:52.822234   58186 ssh_runner.go:195] Run: openssl version
	I0410 22:48:52.830835   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:52.843425   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848384   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848444   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.854869   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:52.867228   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:52.879319   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884241   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884324   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.890349   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:52.902398   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:52.913996   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918757   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918824   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.924669   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:48:52.936581   58186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:52.941242   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:52.947526   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:52.953939   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:52.960447   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:52.966829   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:52.973148   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
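	Each `openssl x509 ... -checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours (86400 seconds). The same check can be expressed with crypto/x509; the sketch below is illustrative only and assumes nothing beyond the cert path shown in the log.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers for 24 hours.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```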
	I0410 22:48:52.979557   58186 kubeadm.go:391] StartCluster: {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:52.979669   58186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:52.979744   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.018394   58186 cri.go:89] found id: ""
	I0410 22:48:53.018479   58186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:53.030088   58186 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:53.030112   58186 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:53.030118   58186 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:53.030184   58186 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:53.041035   58186 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:53.042312   58186 kubeconfig.go:125] found "embed-certs-706500" server: "https://192.168.39.10:8443"
	I0410 22:48:53.044306   58186 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:53.054911   58186 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.10
	I0410 22:48:53.054948   58186 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:53.054974   58186 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:53.055020   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.093035   58186 cri.go:89] found id: ""
	I0410 22:48:53.093109   58186 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:53.111257   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:53.122098   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:53.122125   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:53.122176   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:53.133513   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:53.133587   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:53.144275   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:53.154921   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:53.155000   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:53.165604   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.175520   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:53.175582   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.186094   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:53.196086   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:53.196156   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:53.206564   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:53.217180   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:53.336883   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.151708   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.367165   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.457694   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.572579   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:54.572693   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.073196   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.572865   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.595374   58186 api_server.go:72] duration metric: took 1.022777759s to wait for apiserver process to appear ...
	I0410 22:48:55.595403   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:48:55.595424   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:52.701137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701574   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701606   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:52.701529   59362 retry.go:31] will retry after 1.792719263s: waiting for machine to come up
	I0410 22:48:54.496380   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496823   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:54.496740   59362 retry.go:31] will retry after 2.321115222s: waiting for machine to come up
	I0410 22:48:56.819654   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820140   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:56.820072   59362 retry.go:31] will retry after 2.57309135s: waiting for machine to come up
	I0410 22:48:55.585506   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.585876   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.085775   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.585260   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.585588   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.085661   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.585663   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:00.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.843447   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:48:58.843487   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:48:58.843504   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:58.962381   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:58.962431   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.095611   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.100754   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.100781   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.595968   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.606936   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.606977   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.096182   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.106346   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:00.106388   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.595923   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.600197   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:49:00.609220   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:00.609246   58186 api_server.go:131] duration metric: took 5.013835577s to wait for apiserver health ...
	I0410 22:49:00.609256   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:49:00.609263   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:00.611220   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:00.612765   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:00.625567   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:00.648581   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:00.657652   58186 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:00.657688   58186 system_pods.go:61] "coredns-76f75df574-j4kj8" [1986e6b6-e6c7-4212-bdd5-10360a0b897c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:00.657696   58186 system_pods.go:61] "etcd-embed-certs-706500" [acbf9245-d4f8-4fa6-88a7-4f891f9f8403] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:00.657704   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [b9c79d1d-f571-4ed8-a68f-512e8a2a1705] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:00.657709   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [d229b85d-9a8d-4cd0-ac48-a6aea3769581] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:00.657715   58186 system_pods.go:61] "kube-proxy-8kzff" [ce35a33f-1697-44a7-ad64-83895236bc6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:00.657720   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [72c68a6c-beba-48a5-937b-51c40aab0386] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:00.657726   58186 system_pods.go:61] "metrics-server-57f55c9bc5-4r9pl" [40a91fc1-9e0a-4bcc-a2e9-65e9f2d2b960] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:00.657733   58186 system_pods.go:61] "storage-provisioner" [10f7637e-e6e0-4f04-b1eb-ac3bd205064f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:00.657742   58186 system_pods.go:74] duration metric: took 9.141859ms to wait for pod list to return data ...
	I0410 22:49:00.657752   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:00.662255   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:00.662300   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:00.662315   58186 node_conditions.go:105] duration metric: took 4.553643ms to run NodePressure ...
	I0410 22:49:00.662338   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:00.957923   58186 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962553   58186 kubeadm.go:733] kubelet initialised
	I0410 22:49:00.962575   58186 kubeadm.go:734] duration metric: took 4.616848ms waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962585   58186 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:00.968387   58186 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
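(Editorial aside.) The pod_ready.go lines wait for each system-critical pod's Ready condition to become True. A rough client-go sketch of the same check, not minikube's actual pod_ready.go implementation; the kubeconfig path and pod name below are illustrative values taken from or assumed for this log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; minikube builds its client from the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-j4kj8", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```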
	I0410 22:48:59.395416   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395864   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395893   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:59.395819   59362 retry.go:31] will retry after 2.378137008s: waiting for machine to come up
	I0410 22:49:01.776037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776587   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776641   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:49:01.776526   59362 retry.go:31] will retry after 4.360839049s: waiting for machine to come up
	I0410 22:49:00.585234   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.084884   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.585066   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.085697   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.585573   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.085552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.585521   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.584802   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:05.085266   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.975009   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:04.976854   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:06.141509   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142008   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Found IP for machine: 192.168.72.170
	I0410 22:49:06.142037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has current primary IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142047   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserving static IP address...
	I0410 22:49:06.142422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserved static IP address: 192.168.72.170
	I0410 22:49:06.142451   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for SSH to be available...
	I0410 22:49:06.142476   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.142499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | skip adding static IP to network mk-default-k8s-diff-port-519831 - found existing host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"}
	I0410 22:49:06.142518   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Getting to WaitForSSH function...
	I0410 22:49:06.144878   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145206   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.145238   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145326   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH client type: external
	I0410 22:49:06.145365   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa (-rw-------)
	I0410 22:49:06.145401   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:06.145421   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | About to run SSH command:
	I0410 22:49:06.145438   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | exit 0
	I0410 22:49:06.272546   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:06.272919   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetConfigRaw
	I0410 22:49:06.273605   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.276234   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276610   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.276644   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276851   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:49:06.277100   58701 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:06.277127   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:06.277400   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.279729   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.280146   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280295   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.280480   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280658   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280794   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.280939   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.281121   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.281138   58701 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:06.385219   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:06.385254   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385498   58701 buildroot.go:166] provisioning hostname "default-k8s-diff-port-519831"
	I0410 22:49:06.385527   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385716   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.388422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.388922   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.388963   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.389072   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.389292   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389462   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389600   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.389751   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.389924   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.389938   58701 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-519831 && echo "default-k8s-diff-port-519831" | sudo tee /etc/hostname
	I0410 22:49:06.507221   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-519831
	
	I0410 22:49:06.507252   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.509837   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510179   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.510225   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510385   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.510561   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510880   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.511040   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.511236   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.511262   58701 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-519831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-519831/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-519831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:06.626097   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:06.626129   58701 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:06.626153   58701 buildroot.go:174] setting up certificates
	I0410 22:49:06.626163   58701 provision.go:84] configureAuth start
	I0410 22:49:06.626173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.626499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.629067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629412   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.629450   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629559   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.632132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632517   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.632548   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632674   58701 provision.go:143] copyHostCerts
	I0410 22:49:06.632734   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:06.632755   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:06.632822   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:06.633021   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:06.633037   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:06.633078   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:06.633179   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:06.633191   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:06.633223   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:06.633295   58701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-519831 san=[127.0.0.1 192.168.72.170 default-k8s-diff-port-519831 localhost minikube]
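(Editorial aside.) The provision.go line above generates a machine server certificate whose SANs cover the VM's IPs and hostnames. A compact crypto/x509 sketch of that idea, self-signed here for brevity (minikube signs with its CA key instead), with the SAN list copied from the log and everything else assumed:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-519831"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the provision.go san=[...] list above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.170")},
		DNSNames:    []string{"default-k8s-diff-port-519831", "localhost", "minikube"},
	}
	// Self-signed for illustration only; parent would normally be the CA cert.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```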
	I0410 22:49:06.835016   58701 provision.go:177] copyRemoteCerts
	I0410 22:49:06.835077   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:06.835104   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.837769   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838124   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.838152   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838327   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.838519   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.838669   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.838808   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:06.921929   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:06.947855   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0410 22:49:06.972865   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:06.999630   58701 provision.go:87] duration metric: took 373.45654ms to configureAuth
	I0410 22:49:06.999658   58701 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:06.999872   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:49:06.999942   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.003015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003418   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.003452   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.003793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.003946   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.004062   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.004208   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.004425   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.004448   58701 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:07.273568   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:07.273601   58701 machine.go:97] duration metric: took 996.483382ms to provisionDockerMachine
	I0410 22:49:07.273618   58701 start.go:293] postStartSetup for "default-k8s-diff-port-519831" (driver="kvm2")
	I0410 22:49:07.273634   58701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:07.273660   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.274009   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:07.274040   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.276736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.277155   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.277537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.277740   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.277891   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.361056   58701 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:07.365729   58701 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:07.365759   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:07.365834   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:07.365935   58701 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:07.366064   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:07.376754   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:07.509384   57270 start.go:364] duration metric: took 56.035567079s to acquireMachinesLock for "no-preload-646133"
	I0410 22:49:07.509424   57270 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:49:07.509432   57270 fix.go:54] fixHost starting: 
	I0410 22:49:07.509837   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:07.509872   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:07.526882   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0410 22:49:07.527337   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:07.527780   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:49:07.527801   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:07.528077   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:07.528238   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:07.528366   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:49:07.529732   57270 fix.go:112] recreateIfNeeded on no-preload-646133: state=Stopped err=<nil>
	I0410 22:49:07.529755   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	W0410 22:49:07.529878   57270 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:49:07.531875   57270 out.go:177] * Restarting existing kvm2 VM for "no-preload-646133" ...
	I0410 22:49:07.402691   58701 start.go:296] duration metric: took 129.059293ms for postStartSetup
	I0410 22:49:07.402731   58701 fix.go:56] duration metric: took 20.99318672s for fixHost
	I0410 22:49:07.402751   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.405634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.405955   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.405996   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.406161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.406378   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406647   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.406826   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.407062   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.407079   58701 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:49:07.509210   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789347.471050157
	
	I0410 22:49:07.509233   58701 fix.go:216] guest clock: 1712789347.471050157
	I0410 22:49:07.509241   58701 fix.go:229] Guest: 2024-04-10 22:49:07.471050157 +0000 UTC Remote: 2024-04-10 22:49:07.402735415 +0000 UTC m=+140.054227768 (delta=68.314742ms)
	I0410 22:49:07.509287   58701 fix.go:200] guest clock delta is within tolerance: 68.314742ms
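(Editorial aside.) The fix.go lines compute the skew between the guest's `date +%s.%N` output and the host's wall clock and accept it if it is within tolerance. A small sketch of that comparison, not minikube's fix.go itself; the timestamp is taken from the log and the tolerance value is an assumption:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's "seconds.nanoseconds" timestamp and
// returns how far the local clock is ahead of (or behind) the guest clock.
func guestClockDelta(guestOut string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad to nine digits so "4710" means 471000000ns rather than 4710ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return local.Sub(time.Unix(sec, nsec)), nil
}

func main() {
	// Guest timestamp taken from the log above; the tolerance here is assumed.
	d, _ := guestClockDelta("1712789347.471050157", time.Now())
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", d, d < tolerance && d > -tolerance)
}
```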
	I0410 22:49:07.509297   58701 start.go:83] releasing machines lock for "default-k8s-diff-port-519831", held for 21.099785205s
	I0410 22:49:07.509328   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.509613   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:07.512255   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.512667   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512827   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513364   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513531   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513610   58701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:07.513649   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.513750   58701 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:07.513771   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.516338   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516685   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.516802   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516951   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517142   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.517161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.517310   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517470   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517602   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517604   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.517765   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.594218   58701 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:07.633783   58701 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:07.790430   58701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:07.797279   58701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:07.797358   58701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:07.815457   58701 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:07.815488   58701 start.go:494] detecting cgroup driver to use...
	I0410 22:49:07.815561   58701 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:07.833038   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:07.848577   58701 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:07.848648   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:07.863609   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:07.878299   58701 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:07.999388   58701 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:08.155534   58701 docker.go:233] disabling docker service ...
	I0410 22:49:08.155613   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:08.175545   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:08.195923   58701 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:08.340282   58701 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:08.485647   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:08.500245   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:08.520493   58701 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:08.520582   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.535455   58701 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:08.535521   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.547058   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.559638   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.571374   58701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:08.583796   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.598091   58701 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.622634   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.633858   58701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:08.645114   58701 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:08.645167   58701 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:08.660204   58701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:08.671345   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:08.804523   58701 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:08.953644   58701 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:08.953717   58701 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:08.958661   58701 start.go:562] Will wait 60s for crictl version
	I0410 22:49:08.958715   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:49:08.962938   58701 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:09.006335   58701 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:49:09.006425   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.037315   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.069366   58701 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:49:07.533174   57270 main.go:141] libmachine: (no-preload-646133) Calling .Start
	I0410 22:49:07.533352   57270 main.go:141] libmachine: (no-preload-646133) Ensuring networks are active...
	I0410 22:49:07.534117   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network default is active
	I0410 22:49:07.534413   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network mk-no-preload-646133 is active
	I0410 22:49:07.534851   57270 main.go:141] libmachine: (no-preload-646133) Getting domain xml...
	I0410 22:49:07.535553   57270 main.go:141] libmachine: (no-preload-646133) Creating domain...
	I0410 22:49:08.844990   57270 main.go:141] libmachine: (no-preload-646133) Waiting to get IP...
	I0410 22:49:08.845908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:08.846363   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:08.846459   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:08.846332   59513 retry.go:31] will retry after 241.150391ms: waiting for machine to come up
	I0410 22:49:09.088961   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.089455   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.089489   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.089417   59513 retry.go:31] will retry after 349.96397ms: waiting for machine to come up
	I0410 22:49:09.441226   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.441799   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.441828   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.441754   59513 retry.go:31] will retry after 444.576999ms: waiting for machine to come up
	I0410 22:49:05.585408   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.085250   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.585503   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.085422   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.584909   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.084863   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.585859   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.085175   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.585660   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:10.085221   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.475385   58186 pod_ready.go:92] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:07.475414   58186 pod_ready.go:81] duration metric: took 6.506993581s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:07.475424   58186 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:09.486133   58186 pod_ready.go:102] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:11.483972   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.483994   58186 pod_ready.go:81] duration metric: took 4.008564427s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.484005   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490340   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.490380   58186 pod_ready.go:81] duration metric: took 6.362017ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490399   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497078   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.497110   58186 pod_ready.go:81] duration metric: took 6.701645ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497124   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504091   58186 pod_ready.go:92] pod "kube-proxy-8kzff" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.504118   58186 pod_ready.go:81] duration metric: took 6.985136ms for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504132   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510619   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.510656   58186 pod_ready.go:81] duration metric: took 6.513031ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510674   58186 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:09.070592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:09.073850   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:09.074190   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074388   58701 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:09.079170   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:09.093764   58701 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:09.093973   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:49:09.094040   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:09.140874   58701 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:49:09.140951   58701 ssh_runner.go:195] Run: which lz4
	I0410 22:49:09.146775   58701 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:49:09.152876   58701 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:49:09.152917   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:49:10.827934   58701 crio.go:462] duration metric: took 1.681191787s to copy over tarball
	I0410 22:49:10.828019   58701 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:49:09.888688   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.892576   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.892607   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.889179   59513 retry.go:31] will retry after 560.585608ms: waiting for machine to come up
	I0410 22:49:10.451001   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:10.451630   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:10.451663   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:10.451590   59513 retry.go:31] will retry after 601.519186ms: waiting for machine to come up
	I0410 22:49:11.054324   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.054664   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.054693   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.054653   59513 retry.go:31] will retry after 750.183717ms: waiting for machine to come up
	I0410 22:49:11.805908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.806303   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.806331   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.806254   59513 retry.go:31] will retry after 883.805148ms: waiting for machine to come up
	I0410 22:49:12.691316   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:12.691861   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:12.691893   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:12.691804   59513 retry.go:31] will retry after 1.39605629s: waiting for machine to come up
	I0410 22:49:14.090350   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:14.090795   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:14.090821   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:14.090753   59513 retry.go:31] will retry after 1.388324423s: waiting for machine to come up
	I0410 22:49:10.585333   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.585062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.085191   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.585644   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.085615   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.585355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.085270   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.584868   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:15.085639   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.521844   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:16.041569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:13.328492   58701 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.500439721s)
	I0410 22:49:13.328534   58701 crio.go:469] duration metric: took 2.500564923s to extract the tarball
	I0410 22:49:13.328545   58701 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:49:13.367568   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:13.415759   58701 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:49:13.415780   58701 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:49:13.415788   58701 kubeadm.go:928] updating node { 192.168.72.170 8444 v1.29.3 crio true true} ...
	I0410 22:49:13.415899   58701 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-519831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:13.415982   58701 ssh_runner.go:195] Run: crio config
	I0410 22:49:13.473019   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:13.473046   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:13.473063   58701 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:13.473100   58701 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.170 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-519831 NodeName:default-k8s-diff-port-519831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:13.473261   58701 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-519831"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:13.473325   58701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:49:13.487302   58701 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:13.487368   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:13.498496   58701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0410 22:49:13.518312   58701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:49:13.537972   58701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0410 22:49:13.558714   58701 ssh_runner.go:195] Run: grep 192.168.72.170	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:13.562886   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:13.575957   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:13.706316   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:13.725898   58701 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831 for IP: 192.168.72.170
	I0410 22:49:13.725924   58701 certs.go:194] generating shared ca certs ...
	I0410 22:49:13.725944   58701 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:13.726119   58701 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:13.726173   58701 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:13.726185   58701 certs.go:256] generating profile certs ...
	I0410 22:49:13.726297   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/client.key
	I0410 22:49:13.726398   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key.ff579077
	I0410 22:49:13.726454   58701 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key
	I0410 22:49:13.726606   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:13.726644   58701 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:13.726656   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:13.726685   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:13.726725   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:13.726756   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:13.726811   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:13.727747   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:13.780060   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:13.818446   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:13.865986   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:13.897578   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0410 22:49:13.937123   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:49:13.970558   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:13.997678   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:49:14.025173   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:14.051190   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:14.079109   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:14.107547   58701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:14.128029   58701 ssh_runner.go:195] Run: openssl version
	I0410 22:49:14.134686   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:14.148733   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154057   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154114   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.160626   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:14.174406   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:14.187513   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193279   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193344   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.199518   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:14.213538   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:14.225618   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230610   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230666   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.236756   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:14.250041   58701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:14.255320   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:14.262821   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:14.268854   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:14.275152   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:14.281598   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:14.287895   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:49:14.294125   58701 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:14.294246   58701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:14.294301   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.332192   58701 cri.go:89] found id: ""
	I0410 22:49:14.332268   58701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:14.343174   58701 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:14.343198   58701 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:14.343205   58701 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:14.343261   58701 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:14.355648   58701 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:14.357310   58701 kubeconfig.go:125] found "default-k8s-diff-port-519831" server: "https://192.168.72.170:8444"
	I0410 22:49:14.360713   58701 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:14.371972   58701 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.170
	I0410 22:49:14.372011   58701 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:14.372025   58701 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:14.372083   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.410517   58701 cri.go:89] found id: ""
	I0410 22:49:14.410594   58701 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:14.428686   58701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:14.443256   58701 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:14.443281   58701 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:14.443353   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0410 22:49:14.455086   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:14.455156   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:14.466151   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0410 22:49:14.476799   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:14.476852   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:14.487588   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.498476   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:14.498534   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.509248   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0410 22:49:14.520223   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:14.520287   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:14.531388   58701 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:14.542775   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:14.673733   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.773338   58701 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099570437s)
	I0410 22:49:15.773385   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.985355   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.052996   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.126251   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:16.126362   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.626615   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.127289   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.166269   58701 api_server.go:72] duration metric: took 1.040013076s to wait for apiserver process to appear ...
	I0410 22:49:17.166315   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:17.166339   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:17.166964   58701 api_server.go:269] stopped: https://192.168.72.170:8444/healthz: Get "https://192.168.72.170:8444/healthz": dial tcp 192.168.72.170:8444: connect: connection refused
	I0410 22:49:15.480947   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:15.481358   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:15.481386   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:15.481309   59513 retry.go:31] will retry after 2.276682979s: waiting for machine to come up
	I0410 22:49:17.759404   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:17.759931   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:17.759975   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:17.759887   59513 retry.go:31] will retry after 2.254373826s: waiting for machine to come up
	I0410 22:49:15.585476   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.085404   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.585123   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.085713   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.584877   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.085601   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.585222   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.084891   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.585215   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:20.085668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.519156   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:20.520053   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:17.667248   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.709507   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:20.709538   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:20.709554   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.740392   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:20.740483   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.166658   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.174343   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.174378   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.667345   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.685078   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.685112   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:22.166644   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:22.171611   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:49:22.178452   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:22.178484   58701 api_server.go:131] duration metric: took 5.012161431s to wait for apiserver health ...
	I0410 22:49:22.178493   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:22.178499   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:22.180370   58701 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:22.181768   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:22.197462   58701 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:22.218348   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:22.236800   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:22.236830   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:22.236837   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:22.236843   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:22.236849   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:22.236861   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:22.236866   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:22.236871   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:22.236876   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:22.236884   58701 system_pods.go:74] duration metric: took 18.510987ms to wait for pod list to return data ...
	I0410 22:49:22.236893   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:22.242143   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:22.242167   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:22.242177   58701 node_conditions.go:105] duration metric: took 5.279415ms to run NodePressure ...
	I0410 22:49:22.242192   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:22.532741   58701 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537418   58701 kubeadm.go:733] kubelet initialised
	I0410 22:49:22.537444   58701 kubeadm.go:734] duration metric: took 4.675489ms waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537453   58701 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:22.543364   58701 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.549161   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549186   58701 pod_ready.go:81] duration metric: took 5.796619ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.549196   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549207   58701 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.554131   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554156   58701 pod_ready.go:81] duration metric: took 4.941026ms for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.554165   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554172   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.558783   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558812   58701 pod_ready.go:81] duration metric: took 4.633262ms for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.558822   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558828   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.622314   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622344   58701 pod_ready.go:81] duration metric: took 63.505681ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.622356   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622370   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.022239   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022266   58701 pod_ready.go:81] duration metric: took 399.888837ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.022275   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022286   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.422213   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422245   58701 pod_ready.go:81] duration metric: took 399.950443ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.422257   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422270   58701 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.823832   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823858   58701 pod_ready.go:81] duration metric: took 401.581123ms for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.823868   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823875   58701 pod_ready.go:38] duration metric: took 1.286413141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:23.823889   58701 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:49:23.840663   58701 ops.go:34] apiserver oom_adj: -16
	I0410 22:49:23.840691   58701 kubeadm.go:591] duration metric: took 9.497479077s to restartPrimaryControlPlane
	I0410 22:49:23.840702   58701 kubeadm.go:393] duration metric: took 9.546582608s to StartCluster
	I0410 22:49:23.840718   58701 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.840795   58701 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:49:23.843350   58701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.843613   58701 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:49:23.845385   58701 out.go:177] * Verifying Kubernetes components...
	I0410 22:49:23.843685   58701 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:49:23.846686   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:23.845421   58701 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846834   58701 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-519831"
	I0410 22:49:23.843826   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	W0410 22:49:23.846852   58701 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:49:23.846901   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.845429   58701 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846969   58701 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-519831"
	I0410 22:49:23.845433   58701 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.847069   58701 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.847088   58701 addons.go:243] addon metrics-server should already be in state true
	I0410 22:49:23.847122   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.847349   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847358   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847381   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847384   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847495   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847532   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.863090   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I0410 22:49:23.863240   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0410 22:49:23.863685   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.863793   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.864315   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864333   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864356   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864371   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864741   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864749   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864949   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.865210   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.865258   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.867599   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0410 22:49:23.868035   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.868627   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.868652   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.868739   58701 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.868757   58701 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:49:23.868785   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.869023   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.869094   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869136   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.869562   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869630   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.881589   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0410 22:49:23.881997   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.882429   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.882442   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.882719   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.882914   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.884708   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.886865   58701 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:49:23.886946   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0410 22:49:23.888493   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:49:23.888511   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:49:23.888532   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.888850   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.889129   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0410 22:49:23.889513   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.889536   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.889601   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.890020   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.890265   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.890285   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.890308   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.890667   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.891458   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.891496   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.892090   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.892232   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.894143   58701 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:20.015689   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:20.016192   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:20.016230   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:20.016163   59513 retry.go:31] will retry after 2.611766259s: waiting for machine to come up
	I0410 22:49:22.629270   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:22.629704   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:22.629731   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:22.629644   59513 retry.go:31] will retry after 3.270808972s: waiting for machine to come up
	I0410 22:49:23.892695   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.892720   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.895489   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.895599   58701 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:23.895609   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:49:23.895623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.896367   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.896558   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.896754   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.898964   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899320   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.899355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899535   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.899715   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.899855   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.899999   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.910046   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0410 22:49:23.910471   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.911056   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.911077   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.911445   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.911653   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.913330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.913603   58701 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:23.913619   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:49:23.913637   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.916303   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916759   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.916820   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916923   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.917137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.917377   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.917517   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:24.067636   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:24.087396   58701 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:24.204429   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:49:24.204457   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:49:24.213319   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:24.224083   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:24.234156   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:49:24.234182   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:49:24.273950   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.273980   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:49:24.295822   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.580460   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580835   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.580853   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.580864   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.581102   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.581126   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.589648   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.589714   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.589981   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.590040   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.590062   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339438   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.043578779s)
	I0410 22:49:25.339489   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339451   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115333809s)
	I0410 22:49:25.339560   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339593   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339872   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339897   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339911   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339924   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339944   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.339956   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339984   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340004   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.340015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.340149   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.340185   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340203   58701 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-519831"
	I0410 22:49:25.341481   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.341497   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.344575   58701 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0410 22:49:20.585629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.084898   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.585346   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.085672   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.585768   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.085613   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.585507   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.085104   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.585745   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:25.084858   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.017917   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.018591   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:27.019206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.341622   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.345974   58701 addons.go:505] duration metric: took 1.502302613s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0410 22:49:26.094458   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:25.904062   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904580   57270 main.go:141] libmachine: (no-preload-646133) Found IP for machine: 192.168.50.17
	I0410 22:49:25.904608   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has current primary IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904622   57270 main.go:141] libmachine: (no-preload-646133) Reserving static IP address...
	I0410 22:49:25.905076   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.905117   57270 main.go:141] libmachine: (no-preload-646133) DBG | skip adding static IP to network mk-no-preload-646133 - found existing host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"}
	I0410 22:49:25.905134   57270 main.go:141] libmachine: (no-preload-646133) Reserved static IP address: 192.168.50.17
	I0410 22:49:25.905151   57270 main.go:141] libmachine: (no-preload-646133) Waiting for SSH to be available...
	I0410 22:49:25.905170   57270 main.go:141] libmachine: (no-preload-646133) DBG | Getting to WaitForSSH function...
	I0410 22:49:25.907397   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907773   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.907796   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907937   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH client type: external
	I0410 22:49:25.907960   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa (-rw-------)
	I0410 22:49:25.907979   57270 main.go:141] libmachine: (no-preload-646133) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:25.907989   57270 main.go:141] libmachine: (no-preload-646133) DBG | About to run SSH command:
	I0410 22:49:25.907997   57270 main.go:141] libmachine: (no-preload-646133) DBG | exit 0
	I0410 22:49:26.032683   57270 main.go:141] libmachine: (no-preload-646133) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:26.033065   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetConfigRaw
	I0410 22:49:26.033761   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.036545   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.036951   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.036982   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.037187   57270 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/config.json ...
	I0410 22:49:26.037403   57270 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:26.037424   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:26.037655   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.039750   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040081   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.040102   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040285   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.040486   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040657   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040818   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.040972   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.041180   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.041197   57270 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:26.149298   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:26.149335   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149618   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:49:26.149647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149849   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.152432   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152799   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.152829   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.153233   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153406   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153571   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.153774   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.153992   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.154010   57270 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-646133 && echo "no-preload-646133" | sudo tee /etc/hostname
	I0410 22:49:26.283760   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-646133
	
	I0410 22:49:26.283794   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.286605   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.286925   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.286955   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.287097   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.287277   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287425   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287551   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.287725   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.287944   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.287969   57270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-646133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-646133/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-646133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:26.402869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:26.402905   57270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:26.402945   57270 buildroot.go:174] setting up certificates
	I0410 22:49:26.402956   57270 provision.go:84] configureAuth start
	I0410 22:49:26.402973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.403234   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.405718   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406079   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.406119   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.408549   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.408882   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.408917   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.409034   57270 provision.go:143] copyHostCerts
	I0410 22:49:26.409106   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:26.409124   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:26.409177   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:26.409310   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:26.409320   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:26.409341   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:26.409405   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:26.409412   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:26.409430   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:26.409476   57270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.no-preload-646133 san=[127.0.0.1 192.168.50.17 localhost minikube no-preload-646133]
	I0410 22:49:26.567556   57270 provision.go:177] copyRemoteCerts
	I0410 22:49:26.567611   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:26.567647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.570205   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570589   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.570614   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570805   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.571034   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.571172   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.571294   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:26.655943   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:26.681691   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:49:26.706573   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:26.733054   57270 provision.go:87] duration metric: took 330.073783ms to configureAuth
	I0410 22:49:26.733088   57270 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:26.733276   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:49:26.733347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.735910   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736264   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.736295   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736474   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.736648   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736798   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736925   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.737055   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.737225   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.737241   57270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:27.008174   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:27.008202   57270 machine.go:97] duration metric: took 970.785508ms to provisionDockerMachine
	I0410 22:49:27.008216   57270 start.go:293] postStartSetup for "no-preload-646133" (driver="kvm2")
	I0410 22:49:27.008236   57270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:27.008263   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.008554   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:27.008580   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.011150   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011561   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.011604   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011900   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.012090   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.012274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.012432   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.105247   57270 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:27.109842   57270 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:27.109868   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:27.109927   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:27.109993   57270 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:27.110080   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:27.121451   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:27.151797   57270 start.go:296] duration metric: took 143.569287ms for postStartSetup
	I0410 22:49:27.151836   57270 fix.go:56] duration metric: took 19.642403615s for fixHost
	I0410 22:49:27.151865   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.154454   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154869   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.154903   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154987   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.155193   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155512   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.155660   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:27.155862   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:27.155875   57270 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:49:27.265609   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789367.209761579
	
	I0410 22:49:27.265652   57270 fix.go:216] guest clock: 1712789367.209761579
	I0410 22:49:27.265662   57270 fix.go:229] Guest: 2024-04-10 22:49:27.209761579 +0000 UTC Remote: 2024-04-10 22:49:27.151840464 +0000 UTC m=+377.371052419 (delta=57.921115ms)
	I0410 22:49:27.265687   57270 fix.go:200] guest clock delta is within tolerance: 57.921115ms
	I0410 22:49:27.265697   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 19.756293566s
	I0410 22:49:27.265724   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.265960   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:27.268735   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269184   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.269216   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269380   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270014   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270233   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270331   57270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:27.270376   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.270645   57270 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:27.270669   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.273542   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273846   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273986   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274019   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274140   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274230   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274259   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274318   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274400   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274531   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274536   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274688   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274723   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.274806   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.359922   57270 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:27.400885   57270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:27.555260   57270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:27.561275   57270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:27.561333   57270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:27.578478   57270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:27.578502   57270 start.go:494] detecting cgroup driver to use...
	I0410 22:49:27.578567   57270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:27.598020   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:27.613068   57270 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:27.613140   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:27.629253   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:27.644130   57270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:27.791801   57270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:27.952366   57270 docker.go:233] disabling docker service ...
	I0410 22:49:27.952477   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:27.968629   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:27.982330   57270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:28.117396   57270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:28.240808   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:28.257299   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:28.280918   57270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:28.280991   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.296415   57270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:28.296480   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.308602   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.319535   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.329812   57270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:28.341466   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.354706   57270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.374405   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
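Taken together, the tee and sed commands above would leave the node with roughly the following files. This is reconstructed from the commands in the log; the untouched remainder of 02-crio.conf is omitted and line order in the real file may differ:

    /etc/crictl.yaml:
        runtime-endpoint: unix:///var/run/crio/crio.sock

    /etc/crio/crio.conf.d/02-crio.conf (relevant lines):
        pause_image = "registry.k8s.io/pause:3.9"
        cgroup_manager = "cgroupfs"
        conmon_cgroup = "pod"
        default_sysctls = [
          "net.ipv4.ip_unprivileged_port_start=0",
        ]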
	I0410 22:49:28.385094   57270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:28.394412   57270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:28.394466   57270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:28.407654   57270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:28.418381   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:28.525783   57270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:28.678643   57270 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:28.678706   57270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:28.683681   57270 start.go:562] Will wait 60s for crictl version
	I0410 22:49:28.683737   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:28.687703   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:28.725311   57270 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
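The two "Will wait 60s" lines above amount to poll-until-ready loops: first for the CRI-O socket to appear after the restart, then for crictl version to succeed against it. A hedged Go sketch of the same idea (commands taken from the log, loop details assumed):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitFor polls the given command until it exits successfully or the timeout expires.
    func waitFor(timeout time.Duration, name string, args ...string) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command(name, args...).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", name)
    }

    func main() {
        // Mirrors the log: wait for the CRI-O socket, then for a working crictl.
        if err := waitFor(60*time.Second, "stat", "/var/run/crio/crio.sock"); err != nil {
            log.Fatal(err)
        }
        if err := waitFor(60*time.Second, "sudo", "/usr/bin/crictl", "version"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("cri-o is up")
    }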
	I0410 22:49:28.725414   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.755393   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.788963   57270 out.go:177] * Preparing Kubernetes v1.30.0-rc.1 on CRI-O 1.29.1 ...
	I0410 22:49:28.790274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:28.793091   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793418   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:28.793452   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793659   57270 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:28.798916   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:28.814575   57270 kubeadm.go:877] updating cluster {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:28.814689   57270 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 22:49:28.814717   57270 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:28.852604   57270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.1". assuming images are not preloaded.
	I0410 22:49:28.852627   57270 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.1 registry.k8s.io/kube-controller-manager:v1.30.0-rc.1 registry.k8s.io/kube-scheduler:v1.30.0-rc.1 registry.k8s.io/kube-proxy:v1.30.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:49:28.852698   57270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.852707   57270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.852733   57270 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.852756   57270 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0410 22:49:28.852803   57270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.852870   57270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.852890   57270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.852917   57270 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854348   57270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.854354   57270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.854378   57270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.854419   57270 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854421   57270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.854355   57270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.854353   57270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.854740   57270 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0410 22:49:29.066608   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0410 22:49:29.072486   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.073347   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.075270   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.082649   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.085737   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.093699   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.290780   57270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" does not exist at hash "ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b" in container runtime
	I0410 22:49:29.290810   57270 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0410 22:49:29.290839   57270 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.290837   57270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.290849   57270 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0410 22:49:29.290871   57270 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290902   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304346   57270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.1" does not exist at hash "69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061" in container runtime
	I0410 22:49:29.304409   57270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.304459   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304510   57270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" does not exist at hash "bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895" in container runtime
	I0410 22:49:29.304599   57270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.304635   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304563   57270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" does not exist at hash "577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090" in container runtime
	I0410 22:49:29.304689   57270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.304738   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.311219   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.311264   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.311311   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.324663   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.324770   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.324855   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.442426   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0410 22:49:29.442541   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.458416   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0410 22:49:29.458526   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:29.468890   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.468998   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.481365   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.481482   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.498862   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498899   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0410 22:49:29.498913   57270 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498927   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498951   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1 (exists)
	I0410 22:49:29.498957   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498964   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498982   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1 (exists)
	I0410 22:49:29.499012   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498926   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0410 22:49:29.507249   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1 (exists)
	I0410 22:49:29.507282   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1 (exists)
	I0410 22:49:29.751612   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
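The no-preload flow above follows one pattern per image: podman image inspect to see whether the runtime already has it, crictl rmi for any stale copy, stat on the cached tarball under /var/lib/minikube/images, and podman load when needed. A hedged Go sketch of that check-then-load idea, with image names and paths copied from the log and everything else assumed:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "path/filepath"
    )

    // loadIfMissing loads a cached image tarball only when the runtime does not
    // already contain the image, mirroring the inspect -> load pattern in the log.
    func loadIfMissing(image, tarball string) error {
        // `podman image inspect` exits non-zero when the image is absent.
        if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
            fmt.Println("already present:", image)
            return nil
        }
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
        }
        fmt.Println("loaded:", image)
        return nil
    }

    func main() {
        images := map[string]string{
            "registry.k8s.io/etcd:3.5.12-0":               "etcd_3.5.12-0",
            "registry.k8s.io/kube-scheduler:v1.30.0-rc.1": "kube-scheduler_v1.30.0-rc.1",
        }
        for image, file := range images {
            if err := loadIfMissing(image, filepath.Join("/var/lib/minikube/images", file)); err != nil {
                log.Fatal(err)
            }
        }
    }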
	I0410 22:49:25.585095   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.085119   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.585846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.084920   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.585251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.084926   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.585643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.084937   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.585666   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:30.085088   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.518476   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:31.518837   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:28.592323   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.098027   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.591789   58701 node_ready.go:49] node "default-k8s-diff-port-519831" has status "Ready":"True"
	I0410 22:49:31.591822   58701 node_ready.go:38] duration metric: took 7.504383585s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:31.591835   58701 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:31.599103   58701 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607758   58701 pod_ready.go:92] pod "coredns-76f75df574-ghnvx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:31.607787   58701 pod_ready.go:81] duration metric: took 8.655521ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607801   58701 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:33.690936   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.191950196s)
	I0410 22:49:33.690965   57270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.939318786s)
	I0410 22:49:33.691014   57270 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0410 22:49:33.691045   57270 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:33.690973   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0410 22:49:33.691091   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.691101   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:33.691163   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.695868   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:30.585515   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.085273   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.585347   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.585361   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.085648   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.585256   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.084938   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.585005   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:35.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.018733   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:36.019904   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:33.615785   58701 pod_ready.go:102] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:35.115811   58701 pod_ready.go:92] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:35.115846   58701 pod_ready.go:81] duration metric: took 3.508038321s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:35.115856   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123593   58701 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.123624   58701 pod_ready.go:81] duration metric: took 2.007760022s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123638   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130390   58701 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.130421   58701 pod_ready.go:81] duration metric: took 6.771239ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130436   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136219   58701 pod_ready.go:92] pod "kube-proxy-5mbwx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.136253   58701 pod_ready.go:81] duration metric: took 5.809077ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136265   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142909   58701 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.142939   58701 pod_ready.go:81] duration metric: took 6.664922ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142954   58701 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
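The pod_ready.go lines interleaved above poll the Kubernetes API until each system-critical pod reports the Ready condition. A rough client-go sketch of such a readiness wait, assuming a kubeconfig at the default location; the namespace, pod name and 6-minute budget come from the log, the rest is illustrative:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True, which is what
    // the pod_ready.go checks in the log are waiting on.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same budget the log mentions
        for time.Now().Before(deadline) {
            p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "etcd-default-k8s-diff-port-519831", metav1.GetOptions{})
            if err == nil && podReady(p) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for pod to be Ready")
    }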
	I0410 22:49:35.767190   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1: (2.075997626s)
	I0410 22:49:35.767227   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1 from cache
	I0410 22:49:35.767261   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767278   57270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071386498s)
	I0410 22:49:35.767326   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767327   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0410 22:49:35.767497   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:35.773679   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0410 22:49:37.666289   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1: (1.898906389s)
	I0410 22:49:37.666326   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1 from cache
	I0410 22:49:37.666358   57270 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:37.666422   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:39.652778   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.986322091s)
	I0410 22:49:39.652820   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0410 22:49:39.652855   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:39.652951   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:35.585228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.085699   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.585690   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.085760   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.584867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:37.584947   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:37.625964   57719 cri.go:89] found id: ""
	I0410 22:49:37.625989   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.625996   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:37.626001   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:37.626046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:37.669151   57719 cri.go:89] found id: ""
	I0410 22:49:37.669178   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.669188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:37.669194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:37.669242   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:37.711426   57719 cri.go:89] found id: ""
	I0410 22:49:37.711456   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.711466   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:37.711474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:37.711538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:37.754678   57719 cri.go:89] found id: ""
	I0410 22:49:37.754707   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.754719   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:37.754726   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:37.754809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:37.795259   57719 cri.go:89] found id: ""
	I0410 22:49:37.795291   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.795301   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:37.795307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:37.795375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:37.836961   57719 cri.go:89] found id: ""
	I0410 22:49:37.836994   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.837004   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:37.837011   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:37.837075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:37.876195   57719 cri.go:89] found id: ""
	I0410 22:49:37.876223   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.876233   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:37.876239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:37.876290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:37.911688   57719 cri.go:89] found id: ""
	I0410 22:49:37.911715   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.911725   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:37.911736   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:37.911751   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:37.954690   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:37.954734   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:38.006731   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:38.006771   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:38.024290   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:38.024314   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:38.148504   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:38.148529   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:38.148561   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
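When the profile polled by pid 57719 above finds no control-plane containers at all, it falls back to collecting kubelet, dmesg, CRI-O and container-status output for the report. A small hedged sketch of that kind of log collection, reusing the exact commands from the log but with the surrounding glue assumed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one collection command via bash and prints whatever it produced,
    // even when the command itself failed (useful on a broken node).
    func gather(name, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("==> %s <==\n%s\n", name, out)
        if err != nil {
            fmt.Printf("(command failed: %v)\n", err)
        }
    }

    func main() {
        // Same sources the log gathers when no control-plane containers exist.
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("CRI-O", "sudo journalctl -u crio -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }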
	I0410 22:49:38.519483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:40.520822   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:39.150543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:41.151300   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:42.217749   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1: (2.564772479s)
	I0410 22:49:42.217778   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1 from cache
	I0410 22:49:42.217802   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:42.217843   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:44.577826   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1: (2.359955682s)
	I0410 22:49:44.577865   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1 from cache
	I0410 22:49:44.577892   57270 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:44.577940   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:40.726314   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:40.743098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:40.743168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:40.794673   57719 cri.go:89] found id: ""
	I0410 22:49:40.794697   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.794704   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:40.794710   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:40.794756   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:40.836274   57719 cri.go:89] found id: ""
	I0410 22:49:40.836308   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.836319   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:40.836327   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:40.836408   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:40.882249   57719 cri.go:89] found id: ""
	I0410 22:49:40.882276   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.882285   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:40.882292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:40.882357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:40.925829   57719 cri.go:89] found id: ""
	I0410 22:49:40.925867   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.925878   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:40.925885   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:40.925936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:40.978494   57719 cri.go:89] found id: ""
	I0410 22:49:40.978529   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.978540   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:40.978547   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:40.978611   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:41.020935   57719 cri.go:89] found id: ""
	I0410 22:49:41.020964   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.020975   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:41.020982   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:41.021040   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:41.060779   57719 cri.go:89] found id: ""
	I0410 22:49:41.060812   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.060824   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:41.060831   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:41.060885   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:41.119604   57719 cri.go:89] found id: ""
	I0410 22:49:41.119632   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.119643   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:41.119653   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:41.119667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:41.188739   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:41.188774   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:41.203682   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:41.203735   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:41.293423   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:41.293451   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:41.293468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:41.366606   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:41.366649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:43.914447   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:43.930350   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:43.930439   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:43.968867   57719 cri.go:89] found id: ""
	I0410 22:49:43.968921   57719 logs.go:276] 0 containers: []
	W0410 22:49:43.968932   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:43.968939   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:43.969012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:44.010143   57719 cri.go:89] found id: ""
	I0410 22:49:44.010169   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.010181   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:44.010188   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:44.010264   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:44.048610   57719 cri.go:89] found id: ""
	I0410 22:49:44.048637   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.048645   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:44.048651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:44.048697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:44.105939   57719 cri.go:89] found id: ""
	I0410 22:49:44.105973   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.106001   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:44.106009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:44.106086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:44.149699   57719 cri.go:89] found id: ""
	I0410 22:49:44.149726   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.149735   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:44.149743   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:44.149803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:44.193131   57719 cri.go:89] found id: ""
	I0410 22:49:44.193159   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.193167   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:44.193173   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:44.193255   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:44.233751   57719 cri.go:89] found id: ""
	I0410 22:49:44.233781   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.233789   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:44.233801   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:44.233868   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:44.284404   57719 cri.go:89] found id: ""
	I0410 22:49:44.284432   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.284441   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:44.284449   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:44.284461   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:44.330082   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:44.330118   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:44.383452   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:44.383487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:44.399604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:44.399632   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:44.476328   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:44.476368   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:44.476415   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:43.019922   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.519954   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:43.650596   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.651668   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.537183   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0410 22:49:45.537228   57270 cache_images.go:123] Successfully loaded all cached images
	I0410 22:49:45.537235   57270 cache_images.go:92] duration metric: took 16.68459637s to LoadCachedImages
	I0410 22:49:45.537249   57270 kubeadm.go:928] updating node { 192.168.50.17 8443 v1.30.0-rc.1 crio true true} ...
	I0410 22:49:45.537401   57270 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-646133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:45.537476   57270 ssh_runner.go:195] Run: crio config
	I0410 22:49:45.587002   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:45.587031   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:45.587047   57270 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:45.587069   57270 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.17 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-646133 NodeName:no-preload-646133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:45.587205   57270 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-646133"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:45.587272   57270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.1
	I0410 22:49:45.600694   57270 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:45.600758   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:45.613884   57270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0410 22:49:45.633871   57270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0410 22:49:45.654733   57270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
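For context, a generated config like the one scp'd to /var/tmp/minikube/kubeadm.yaml.new above is ultimately consumed by kubeadm itself; the invocation would look roughly like the line below (the node's kubeadm binary path is taken from the log, and any extra flags minikube adds are not shown in this excerpt):

    sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml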
	I0410 22:49:45.673976   57270 ssh_runner.go:195] Run: grep 192.168.50.17	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:45.678260   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:45.693499   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:45.819034   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:45.838775   57270 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133 for IP: 192.168.50.17
	I0410 22:49:45.838799   57270 certs.go:194] generating shared ca certs ...
	I0410 22:49:45.838819   57270 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:45.839010   57270 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:45.839064   57270 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:45.839078   57270 certs.go:256] generating profile certs ...
	I0410 22:49:45.839175   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.key
	I0410 22:49:45.839256   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key.d257fb06
	I0410 22:49:45.839310   57270 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key
	I0410 22:49:45.839480   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:45.839521   57270 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:45.839531   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:45.839551   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:45.839608   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:45.839633   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:45.839674   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:45.840315   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:45.897688   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:45.932242   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:45.979537   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:46.020562   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0410 22:49:46.057254   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:49:46.084070   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:46.112807   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0410 22:49:46.141650   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:46.170167   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:46.196917   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:46.222645   57270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:46.242626   57270 ssh_runner.go:195] Run: openssl version
	I0410 22:49:46.249048   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:46.265110   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270018   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270083   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.276298   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:46.288165   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:46.299040   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303584   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303627   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.309278   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:46.319990   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:46.331654   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336700   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336750   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.342767   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:46.355005   57270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:46.359870   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:46.366270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:46.372625   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:46.379270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:46.386312   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:46.392796   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:49:46.399209   57270 kubeadm.go:391] StartCluster: {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:46.399318   57270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:46.399405   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.439061   57270 cri.go:89] found id: ""
	I0410 22:49:46.439149   57270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:46.450243   57270 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:46.450265   57270 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:46.450271   57270 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:46.450323   57270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:46.460553   57270 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:46.461608   57270 kubeconfig.go:125] found "no-preload-646133" server: "https://192.168.50.17:8443"
	I0410 22:49:46.464469   57270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:46.474775   57270 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.17
	I0410 22:49:46.474808   57270 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:46.474820   57270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:46.474860   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.514933   57270 cri.go:89] found id: ""
	I0410 22:49:46.515010   57270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:46.533830   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:46.547026   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:46.547042   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:46.547081   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:49:46.557093   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:46.557157   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:46.567102   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:49:46.576939   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:46.576998   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:46.586921   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.596189   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:46.596260   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.607803   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:49:46.618166   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:46.618240   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:46.628406   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:46.638748   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:46.767824   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.028868   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.261006059s)
	I0410 22:49:48.028907   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.253185   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.323164   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.404069   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:48.404153   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:48.904557   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.404477   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.437891   57270 api_server.go:72] duration metric: took 1.033818826s to wait for apiserver process to appear ...
	I0410 22:49:49.437927   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:49.437953   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:49.438623   57270 api_server.go:269] stopped: https://192.168.50.17:8443/healthz: Get "https://192.168.50.17:8443/healthz": dial tcp 192.168.50.17:8443: connect: connection refused
	I0410 22:49:47.054122   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:47.069583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:47.069654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:47.113953   57719 cri.go:89] found id: ""
	I0410 22:49:47.113981   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.113989   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:47.113995   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:47.114054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:47.156770   57719 cri.go:89] found id: ""
	I0410 22:49:47.156798   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.156808   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:47.156814   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:47.156891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:47.195227   57719 cri.go:89] found id: ""
	I0410 22:49:47.195252   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.195261   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:47.195266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:47.195328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:47.238109   57719 cri.go:89] found id: ""
	I0410 22:49:47.238138   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.238150   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:47.238157   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:47.238212   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.285062   57719 cri.go:89] found id: ""
	I0410 22:49:47.285093   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.285101   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:47.285108   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:47.285185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:47.324635   57719 cri.go:89] found id: ""
	I0410 22:49:47.324663   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.324670   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:47.324676   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:47.324744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:47.365404   57719 cri.go:89] found id: ""
	I0410 22:49:47.365437   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.365445   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:47.365468   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:47.365535   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:47.412296   57719 cri.go:89] found id: ""
	I0410 22:49:47.412335   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.412346   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:47.412367   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:47.412384   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:47.497998   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:47.498019   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:47.498033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:47.590502   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:47.590536   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:47.647665   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:47.647692   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:47.697704   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:47.697741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.213410   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:50.229408   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:50.229488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:50.268514   57719 cri.go:89] found id: ""
	I0410 22:49:50.268545   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.268556   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:50.268563   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:50.268620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:50.308733   57719 cri.go:89] found id: ""
	I0410 22:49:50.308762   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.308790   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:50.308796   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:50.308857   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:50.353929   57719 cri.go:89] found id: ""
	I0410 22:49:50.353966   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.353977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:50.353985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:50.354043   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:50.397979   57719 cri.go:89] found id: ""
	I0410 22:49:50.398009   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.398019   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:50.398026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:50.398086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.521284   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.018571   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.020874   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:48.151768   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.151820   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:49.939075   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.355813   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:52.355855   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:52.355868   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.502702   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.502733   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.502796   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.509360   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.509401   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.939056   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.946114   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.946154   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.438741   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.444154   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:53.444187   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.938848   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.947578   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:49:53.956247   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:49:53.956281   57270 api_server.go:131] duration metric: took 4.518344859s to wait for apiserver health ...
	I0410 22:49:53.956292   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:53.956301   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:53.958053   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:53.959420   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:53.973242   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:54.004623   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:54.024138   57270 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:54.024185   57270 system_pods.go:61] "coredns-7db6d8ff4d-lbcp6" [1ff36529-d718-41e7-9b61-54ba32efab0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:54.024195   57270 system_pods.go:61] "etcd-no-preload-646133" [a704a953-1418-4425-8ac1-272c632050c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:54.024214   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [90d4ff18-767c-4dbf-b4ad-ff02cb3d542f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:54.024231   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [82c0778e-690f-41a6-a57f-017ab79fd029] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:54.024243   57270 system_pods.go:61] "kube-proxy-v5fbl" [002efd18-4375-455b-9b4a-15bb739120e0] Running
	I0410 22:49:54.024252   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fa9898bc-36a6-4cc4-91e6-bba4ccd22d9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:54.024264   57270 system_pods.go:61] "metrics-server-569cc877fc-pw276" [22de5c2f-13ab-4f69-8eb6-ec4a3c3d1e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:54.024277   57270 system_pods.go:61] "storage-provisioner" [1028921e-3924-4614-bcb6-f949c18e9e4e] Running
	I0410 22:49:54.024287   57270 system_pods.go:74] duration metric: took 19.638409ms to wait for pod list to return data ...
	I0410 22:49:54.024301   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:54.031666   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:54.031694   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:54.031705   57270 node_conditions.go:105] duration metric: took 7.394201ms to run NodePressure ...
	I0410 22:49:54.031720   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:54.339352   57270 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345115   57270 kubeadm.go:733] kubelet initialised
	I0410 22:49:54.345146   57270 kubeadm.go:734] duration metric: took 5.76519ms waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345156   57270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:54.352254   57270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:50.436191   57719 cri.go:89] found id: ""
	I0410 22:49:50.436222   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.436234   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:50.436241   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:50.436316   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:50.476462   57719 cri.go:89] found id: ""
	I0410 22:49:50.476486   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.476494   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:50.476499   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:50.476557   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:50.520025   57719 cri.go:89] found id: ""
	I0410 22:49:50.520054   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.520063   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:50.520071   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:50.520127   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:50.564535   57719 cri.go:89] found id: ""
	I0410 22:49:50.564570   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.564581   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:50.564593   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:50.564624   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:50.620587   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:50.620629   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.634802   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:50.634832   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:50.707625   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:50.707655   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:50.707671   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:50.791935   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:50.791970   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:53.339109   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:53.361555   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:53.361632   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:53.428170   57719 cri.go:89] found id: ""
	I0410 22:49:53.428202   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.428212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:53.428219   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:53.428281   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:53.501929   57719 cri.go:89] found id: ""
	I0410 22:49:53.501957   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.501968   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:53.501977   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:53.502055   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:53.548844   57719 cri.go:89] found id: ""
	I0410 22:49:53.548871   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.548890   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:53.548897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:53.548949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:53.595056   57719 cri.go:89] found id: ""
	I0410 22:49:53.595081   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.595090   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:53.595098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:53.595153   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:53.638885   57719 cri.go:89] found id: ""
	I0410 22:49:53.638920   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.638938   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:53.638946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:53.639046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:53.685526   57719 cri.go:89] found id: ""
	I0410 22:49:53.685565   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.685573   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:53.685579   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:53.685650   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:53.725084   57719 cri.go:89] found id: ""
	I0410 22:49:53.725112   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.725119   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:53.725125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:53.725172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:53.767031   57719 cri.go:89] found id: ""
	I0410 22:49:53.767062   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.767072   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:53.767083   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:53.767103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:53.826570   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:53.826618   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:53.843784   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:53.843822   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:53.926277   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:53.926299   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:53.926317   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:54.024735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:54.024782   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:54.519305   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.520139   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.651382   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:55.149798   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:57.150803   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.359479   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:58.859341   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.586265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:56.602113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:56.602200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:56.647041   57719 cri.go:89] found id: ""
	I0410 22:49:56.647074   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.647086   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:56.647094   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:56.647168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:56.688053   57719 cri.go:89] found id: ""
	I0410 22:49:56.688086   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.688096   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:56.688104   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:56.688190   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:56.729176   57719 cri.go:89] found id: ""
	I0410 22:49:56.729210   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.729221   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:56.729229   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:56.729293   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:56.768877   57719 cri.go:89] found id: ""
	I0410 22:49:56.768905   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.768913   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:56.768919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:56.768966   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:56.807228   57719 cri.go:89] found id: ""
	I0410 22:49:56.807274   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.807286   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:56.807294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:56.807361   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:56.848183   57719 cri.go:89] found id: ""
	I0410 22:49:56.848216   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.848224   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:56.848230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:56.848284   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:56.887894   57719 cri.go:89] found id: ""
	I0410 22:49:56.887923   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.887931   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:56.887937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:56.887993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:56.926908   57719 cri.go:89] found id: ""
	I0410 22:49:56.926935   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.926944   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:56.926952   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:56.926968   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:57.012614   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:57.012640   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:57.012657   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:57.098735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:57.098784   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:57.140798   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:57.140831   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:57.204239   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:57.204283   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:59.720328   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:59.735964   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:59.736042   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:59.774351   57719 cri.go:89] found id: ""
	I0410 22:49:59.774383   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.774393   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:59.774407   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:59.774468   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:59.817222   57719 cri.go:89] found id: ""
	I0410 22:49:59.817248   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.817255   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:59.817260   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:59.817310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:59.854551   57719 cri.go:89] found id: ""
	I0410 22:49:59.854582   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.854594   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:59.854602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:59.854656   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:59.894334   57719 cri.go:89] found id: ""
	I0410 22:49:59.894367   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.894375   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:59.894381   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:59.894442   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:59.932446   57719 cri.go:89] found id: ""
	I0410 22:49:59.932472   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.932482   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:59.932489   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:59.932552   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:59.969168   57719 cri.go:89] found id: ""
	I0410 22:49:59.969193   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.969201   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:59.969209   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:59.969273   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:00.006918   57719 cri.go:89] found id: ""
	I0410 22:50:00.006960   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.006972   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:00.006979   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:00.007036   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:00.050380   57719 cri.go:89] found id: ""
	I0410 22:50:00.050411   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.050424   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:00.050433   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:00.050454   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:00.066340   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:00.066366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:00.146454   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:00.146479   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:00.146494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:00.231174   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:00.231225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:00.278732   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:00.278759   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:59.020938   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.518584   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:59.151137   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.650307   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.359992   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:01.360021   57270 pod_ready.go:81] duration metric: took 7.007734788s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:01.360035   57270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867322   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:02.867349   57270 pod_ready.go:81] duration metric: took 1.507305949s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867362   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.833035   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:02.847316   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:02.847380   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:02.888793   57719 cri.go:89] found id: ""
	I0410 22:50:02.888821   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.888832   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:02.888840   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:02.888897   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:02.926495   57719 cri.go:89] found id: ""
	I0410 22:50:02.926525   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.926535   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:02.926542   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:02.926603   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:02.966185   57719 cri.go:89] found id: ""
	I0410 22:50:02.966217   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.966227   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:02.966233   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:02.966295   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:03.007383   57719 cri.go:89] found id: ""
	I0410 22:50:03.007408   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.007414   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:03.007420   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:03.007490   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:03.044245   57719 cri.go:89] found id: ""
	I0410 22:50:03.044273   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.044281   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:03.044292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:03.044367   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:03.078820   57719 cri.go:89] found id: ""
	I0410 22:50:03.078849   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.078859   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:03.078866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:03.078927   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:03.117205   57719 cri.go:89] found id: ""
	I0410 22:50:03.117233   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.117244   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:03.117251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:03.117313   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:03.155698   57719 cri.go:89] found id: ""
	I0410 22:50:03.155725   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.155735   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:03.155743   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:03.155758   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:03.231685   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:03.231712   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:03.231724   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:03.315122   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:03.315167   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:03.361151   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:03.361186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:03.412134   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:03.412168   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:04.017523   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.024382   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.150291   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.151488   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.873656   57270 pod_ready.go:102] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:05.874079   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:05.874106   57270 pod_ready.go:81] duration metric: took 3.006735064s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.874116   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:07.880447   57270 pod_ready.go:102] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.881209   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.881241   57270 pod_ready.go:81] duration metric: took 3.007117254s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.881271   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887939   57270 pod_ready.go:92] pod "kube-proxy-v5fbl" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.887963   57270 pod_ready.go:81] duration metric: took 6.68304ms for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887975   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894389   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.894415   57270 pod_ready.go:81] duration metric: took 6.43215ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894428   57270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.928116   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:05.942237   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:05.942337   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:05.983813   57719 cri.go:89] found id: ""
	I0410 22:50:05.983842   57719 logs.go:276] 0 containers: []
	W0410 22:50:05.983853   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:05.983861   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:05.983945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:06.024590   57719 cri.go:89] found id: ""
	I0410 22:50:06.024618   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.024626   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:06.024637   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:06.024698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:06.063040   57719 cri.go:89] found id: ""
	I0410 22:50:06.063075   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.063087   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:06.063094   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:06.063160   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:06.102224   57719 cri.go:89] found id: ""
	I0410 22:50:06.102250   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.102259   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:06.102273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:06.102342   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:06.144202   57719 cri.go:89] found id: ""
	I0410 22:50:06.144229   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.144236   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:06.144242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:06.144288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:06.189215   57719 cri.go:89] found id: ""
	I0410 22:50:06.189243   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.189250   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:06.189256   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:06.189308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:06.225218   57719 cri.go:89] found id: ""
	I0410 22:50:06.225247   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.225258   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:06.225266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:06.225330   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:06.265229   57719 cri.go:89] found id: ""
	I0410 22:50:06.265262   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.265273   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:06.265283   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:06.265306   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:06.279794   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:06.279825   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:06.348038   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:06.348063   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:06.348079   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:06.431293   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:06.431339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:06.476033   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:06.476060   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.032099   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:09.046628   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:09.046765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:09.086900   57719 cri.go:89] found id: ""
	I0410 22:50:09.086928   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.086936   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:09.086942   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:09.086998   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:09.124989   57719 cri.go:89] found id: ""
	I0410 22:50:09.125018   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.125028   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:09.125035   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:09.125096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:09.163720   57719 cri.go:89] found id: ""
	I0410 22:50:09.163749   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.163761   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:09.163769   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:09.163822   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:09.203846   57719 cri.go:89] found id: ""
	I0410 22:50:09.203875   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.203883   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:09.203888   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:09.203945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:09.242974   57719 cri.go:89] found id: ""
	I0410 22:50:09.243002   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.243016   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:09.243024   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:09.243092   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:09.278664   57719 cri.go:89] found id: ""
	I0410 22:50:09.278687   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.278694   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:09.278700   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:09.278762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:09.313335   57719 cri.go:89] found id: ""
	I0410 22:50:09.313359   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.313367   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:09.313372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:09.313419   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:09.351160   57719 cri.go:89] found id: ""
	I0410 22:50:09.351195   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.351206   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:09.351225   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:09.351239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:09.425989   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:09.426015   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:09.426033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:09.505189   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:09.505223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:09.549619   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:09.549651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.604322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:09.604360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:08.520115   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:11.018253   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.649190   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.650453   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.903726   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.401154   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:12.119780   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:12.135377   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:12.135458   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:12.178105   57719 cri.go:89] found id: ""
	I0410 22:50:12.178129   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.178138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:12.178144   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:12.178207   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:12.217369   57719 cri.go:89] found id: ""
	I0410 22:50:12.217397   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.217409   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:12.217424   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:12.217488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:12.254185   57719 cri.go:89] found id: ""
	I0410 22:50:12.254213   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.254222   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:12.254230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:12.254291   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:12.295007   57719 cri.go:89] found id: ""
	I0410 22:50:12.295038   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.295048   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:12.295057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:12.295125   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:12.334620   57719 cri.go:89] found id: ""
	I0410 22:50:12.334644   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.334651   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:12.334657   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:12.334707   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:12.371217   57719 cri.go:89] found id: ""
	I0410 22:50:12.371241   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.371249   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:12.371255   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:12.371302   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:12.409571   57719 cri.go:89] found id: ""
	I0410 22:50:12.409599   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.409608   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:12.409617   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:12.409675   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:12.453133   57719 cri.go:89] found id: ""
	I0410 22:50:12.453159   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.453169   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:12.453180   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:12.453194   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:12.505322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:12.505360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:12.520284   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:12.520315   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:12.608057   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:12.608082   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:12.608097   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:12.693240   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:12.693274   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:15.244628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:15.261915   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:15.262020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:15.302874   57719 cri.go:89] found id: ""
	I0410 22:50:15.302903   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.302910   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:15.302916   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:15.302973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:15.347492   57719 cri.go:89] found id: ""
	I0410 22:50:15.347518   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.347527   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:15.347534   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:15.347598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:15.394156   57719 cri.go:89] found id: ""
	I0410 22:50:15.394188   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.394198   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:15.394205   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:15.394265   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:13.518316   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.520507   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.150145   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.651083   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.401582   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:17.901179   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.437656   57719 cri.go:89] found id: ""
	I0410 22:50:15.437682   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.437690   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:15.437695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:15.437748   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:15.475658   57719 cri.go:89] found id: ""
	I0410 22:50:15.475686   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.475697   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:15.475704   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:15.475765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:15.517908   57719 cri.go:89] found id: ""
	I0410 22:50:15.517930   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.517937   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:15.517942   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:15.517991   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:15.560083   57719 cri.go:89] found id: ""
	I0410 22:50:15.560108   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.560117   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:15.560123   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:15.560178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:15.603967   57719 cri.go:89] found id: ""
	I0410 22:50:15.603994   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.604002   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:15.604013   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:15.604028   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:15.659994   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:15.660029   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:15.675627   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:15.675658   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:15.761297   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:15.761320   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:15.761339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:15.839225   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:15.839265   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.386062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:18.399609   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:18.399677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:18.443002   57719 cri.go:89] found id: ""
	I0410 22:50:18.443030   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.443040   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:18.443048   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:18.443106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:18.485089   57719 cri.go:89] found id: ""
	I0410 22:50:18.485121   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.485132   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:18.485140   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:18.485200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:18.524310   57719 cri.go:89] found id: ""
	I0410 22:50:18.524338   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.524347   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:18.524354   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:18.524412   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:18.563535   57719 cri.go:89] found id: ""
	I0410 22:50:18.563573   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.563582   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:18.563587   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:18.563634   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:18.600451   57719 cri.go:89] found id: ""
	I0410 22:50:18.600478   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.600487   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:18.600495   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:18.600562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:18.640445   57719 cri.go:89] found id: ""
	I0410 22:50:18.640472   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.640480   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:18.640485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:18.640550   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:18.677691   57719 cri.go:89] found id: ""
	I0410 22:50:18.677725   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.677746   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:18.677754   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:18.677817   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:18.716753   57719 cri.go:89] found id: ""
	I0410 22:50:18.716850   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.716876   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:18.716897   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:18.716918   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:18.804099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:18.804130   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:18.804144   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:18.883569   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:18.883611   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.930014   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:18.930045   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:18.980029   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:18.980065   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:18.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.020820   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:18.151029   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.650000   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:19.904069   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.401462   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:24.401892   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:21.495499   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:21.511001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:21.511075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:21.551469   57719 cri.go:89] found id: ""
	I0410 22:50:21.551511   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.551522   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:21.551540   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:21.551605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:21.590539   57719 cri.go:89] found id: ""
	I0410 22:50:21.590570   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.590580   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:21.590587   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:21.590654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:21.629005   57719 cri.go:89] found id: ""
	I0410 22:50:21.629030   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.629042   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:21.629048   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:21.629108   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:21.669745   57719 cri.go:89] found id: ""
	I0410 22:50:21.669767   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.669774   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:21.669780   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:21.669834   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:21.707806   57719 cri.go:89] found id: ""
	I0410 22:50:21.707831   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.707839   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:21.707844   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:21.707892   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:21.746698   57719 cri.go:89] found id: ""
	I0410 22:50:21.746727   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.746736   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:21.746742   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:21.746802   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:21.783048   57719 cri.go:89] found id: ""
	I0410 22:50:21.783070   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.783079   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:21.783084   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:21.783131   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:21.822457   57719 cri.go:89] found id: ""
	I0410 22:50:21.822484   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.822492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:21.822500   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:21.822513   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:21.894706   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:21.894747   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:21.909861   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:21.909903   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:21.999344   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:21.999370   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:21.999386   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:22.080004   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:22.080042   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:24.620924   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:24.634937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:24.634999   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:24.686619   57719 cri.go:89] found id: ""
	I0410 22:50:24.686644   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.686655   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:24.686662   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:24.686744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:24.723632   57719 cri.go:89] found id: ""
	I0410 22:50:24.723658   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.723667   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:24.723675   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:24.723738   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:24.760708   57719 cri.go:89] found id: ""
	I0410 22:50:24.760739   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.760750   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:24.760757   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:24.760804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:24.795680   57719 cri.go:89] found id: ""
	I0410 22:50:24.795712   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.795722   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:24.795729   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:24.795793   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:24.833033   57719 cri.go:89] found id: ""
	I0410 22:50:24.833063   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.833074   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:24.833082   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:24.833130   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:24.872840   57719 cri.go:89] found id: ""
	I0410 22:50:24.872864   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.872871   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:24.872877   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:24.872936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:24.915640   57719 cri.go:89] found id: ""
	I0410 22:50:24.915678   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.915688   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:24.915696   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:24.915755   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:24.957164   57719 cri.go:89] found id: ""
	I0410 22:50:24.957207   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.957219   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:24.957230   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:24.957244   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:25.006551   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:25.006601   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:25.021623   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:25.021649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:25.094699   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:25.094722   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:25.094741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:25.181280   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:25.181316   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:22.518442   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.018206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.650481   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.151162   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:26.904127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.400642   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.723475   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:27.737294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:27.737381   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:27.776098   57719 cri.go:89] found id: ""
	I0410 22:50:27.776126   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.776138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:27.776146   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:27.776203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:27.814324   57719 cri.go:89] found id: ""
	I0410 22:50:27.814352   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.814364   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:27.814371   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:27.814447   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:27.849573   57719 cri.go:89] found id: ""
	I0410 22:50:27.849603   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.849614   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:27.849621   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:27.849682   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:27.888904   57719 cri.go:89] found id: ""
	I0410 22:50:27.888932   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.888940   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:27.888946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:27.888993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:27.931772   57719 cri.go:89] found id: ""
	I0410 22:50:27.931800   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.931812   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:27.931821   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:27.931881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:27.975633   57719 cri.go:89] found id: ""
	I0410 22:50:27.975666   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.975676   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:27.975684   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:27.975736   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:28.012251   57719 cri.go:89] found id: ""
	I0410 22:50:28.012280   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.012290   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:28.012298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:28.012364   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:28.048848   57719 cri.go:89] found id: ""
	I0410 22:50:28.048886   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.048898   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:28.048908   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:28.048923   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:28.102215   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:28.102257   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:28.118052   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:28.118081   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:28.190738   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:28.190762   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:28.190777   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:28.269294   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:28.269330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:27.519211   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.521111   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.017915   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.651922   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.150852   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:31.401210   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:33.902054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.833927   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:30.848196   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:30.848266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:30.886077   57719 cri.go:89] found id: ""
	I0410 22:50:30.886117   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.886127   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:30.886133   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:30.886179   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:30.924638   57719 cri.go:89] found id: ""
	I0410 22:50:30.924668   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.924678   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:30.924686   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:30.924762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:30.961106   57719 cri.go:89] found id: ""
	I0410 22:50:30.961136   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.961147   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:30.961154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:30.961213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:31.001374   57719 cri.go:89] found id: ""
	I0410 22:50:31.001412   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.001427   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:31.001434   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:31.001498   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:31.038928   57719 cri.go:89] found id: ""
	I0410 22:50:31.038961   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.038971   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:31.038980   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:31.039057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:31.077033   57719 cri.go:89] found id: ""
	I0410 22:50:31.077067   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.077076   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:31.077083   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:31.077139   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:31.115227   57719 cri.go:89] found id: ""
	I0410 22:50:31.115257   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.115266   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:31.115273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:31.115335   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:31.157339   57719 cri.go:89] found id: ""
	I0410 22:50:31.157372   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.157382   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:31.157393   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:31.157409   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:31.198742   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:31.198770   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:31.255388   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:31.255422   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:31.272018   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:31.272048   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:31.344503   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:31.344524   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:31.344541   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:33.925749   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:33.939402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:33.939475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:33.976070   57719 cri.go:89] found id: ""
	I0410 22:50:33.976093   57719 logs.go:276] 0 containers: []
	W0410 22:50:33.976100   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:33.976106   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:33.976172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:34.013723   57719 cri.go:89] found id: ""
	I0410 22:50:34.013748   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.013758   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:34.013765   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:34.013821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:34.062678   57719 cri.go:89] found id: ""
	I0410 22:50:34.062704   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.062712   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:34.062718   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:34.062774   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:34.123007   57719 cri.go:89] found id: ""
	I0410 22:50:34.123038   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.123046   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:34.123052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:34.123096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:34.188811   57719 cri.go:89] found id: ""
	I0410 22:50:34.188841   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.188852   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:34.188859   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:34.188949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:34.223585   57719 cri.go:89] found id: ""
	I0410 22:50:34.223609   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.223618   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:34.223625   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:34.223680   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:34.260004   57719 cri.go:89] found id: ""
	I0410 22:50:34.260028   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.260036   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:34.260041   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:34.260096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:34.303064   57719 cri.go:89] found id: ""
	I0410 22:50:34.303093   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.303104   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:34.303115   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:34.303134   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:34.359105   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:34.359142   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:34.375420   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:34.375450   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:34.449619   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:34.449645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:34.449660   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:34.534214   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:34.534248   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:34.518609   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.016973   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.649917   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:34.661652   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.150648   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:36.401988   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:38.901505   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.076525   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:37.090789   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:37.090849   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:37.130848   57719 cri.go:89] found id: ""
	I0410 22:50:37.130881   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.130893   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:37.130900   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:37.130967   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:37.170158   57719 cri.go:89] found id: ""
	I0410 22:50:37.170181   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.170188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:37.170194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:37.170269   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:37.210238   57719 cri.go:89] found id: ""
	I0410 22:50:37.210264   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.210274   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:37.210282   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:37.210328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:37.256763   57719 cri.go:89] found id: ""
	I0410 22:50:37.256789   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.256800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:37.256807   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:37.256875   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:37.295323   57719 cri.go:89] found id: ""
	I0410 22:50:37.295355   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.295364   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:37.295372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:37.295443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:37.334066   57719 cri.go:89] found id: ""
	I0410 22:50:37.334094   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.334105   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:37.334113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:37.334170   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:37.374428   57719 cri.go:89] found id: ""
	I0410 22:50:37.374458   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.374477   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:37.374485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:37.374544   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:37.412114   57719 cri.go:89] found id: ""
	I0410 22:50:37.412142   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.412152   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:37.412161   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:37.412174   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:37.453693   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:37.453717   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:37.505484   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:37.505524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:37.523645   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:37.523672   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:37.595107   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:37.595134   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:37.595150   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.180649   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:40.195168   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:40.195243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:40.240130   57719 cri.go:89] found id: ""
	I0410 22:50:40.240160   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.240169   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:40.240175   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:40.240241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:40.276366   57719 cri.go:89] found id: ""
	I0410 22:50:40.276390   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.276406   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:40.276412   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:40.276466   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:40.314991   57719 cri.go:89] found id: ""
	I0410 22:50:40.315016   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.315023   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:40.315029   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:40.315075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:40.354301   57719 cri.go:89] found id: ""
	I0410 22:50:40.354331   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.354342   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:40.354349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:40.354414   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:40.393093   57719 cri.go:89] found id: ""
	I0410 22:50:40.393125   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.393135   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:40.393143   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:40.393204   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:39.021170   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:41.518285   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:39.650047   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.150206   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.902024   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.904180   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.429641   57719 cri.go:89] found id: ""
	I0410 22:50:40.429665   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.429674   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:40.429680   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:40.429727   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:40.468184   57719 cri.go:89] found id: ""
	I0410 22:50:40.468213   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.468224   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:40.468232   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:40.468304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:40.505586   57719 cri.go:89] found id: ""
	I0410 22:50:40.505616   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.505627   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:40.505637   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:40.505652   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:40.562078   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:40.562119   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:40.578135   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:40.578213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:40.659018   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:40.659047   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:40.659061   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.746434   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:40.746478   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.287852   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:43.301797   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:43.301869   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:43.339778   57719 cri.go:89] found id: ""
	I0410 22:50:43.339813   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.339822   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:43.339829   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:43.339893   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:43.378716   57719 cri.go:89] found id: ""
	I0410 22:50:43.378748   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.378759   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:43.378767   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:43.378836   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:43.417128   57719 cri.go:89] found id: ""
	I0410 22:50:43.417152   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.417163   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:43.417171   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:43.417234   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:43.459577   57719 cri.go:89] found id: ""
	I0410 22:50:43.459608   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.459617   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:43.459623   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:43.459678   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:43.497519   57719 cri.go:89] found id: ""
	I0410 22:50:43.497551   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.497561   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:43.497566   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:43.497620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:43.534400   57719 cri.go:89] found id: ""
	I0410 22:50:43.534433   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.534444   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:43.534451   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:43.534540   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:43.574213   57719 cri.go:89] found id: ""
	I0410 22:50:43.574242   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.574253   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:43.574283   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:43.574344   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:43.611078   57719 cri.go:89] found id: ""
	I0410 22:50:43.611106   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.611113   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:43.611121   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:43.611137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:43.698166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:43.698202   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.749368   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:43.749395   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:43.801584   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:43.801621   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:43.817012   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:43.817050   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:43.892325   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:43.518660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.017804   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:44.650389   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.650560   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:45.401723   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:47.901852   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.393325   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:46.407985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:46.408045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:46.442704   57719 cri.go:89] found id: ""
	I0410 22:50:46.442735   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.442745   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:46.442753   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:46.442821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:46.485582   57719 cri.go:89] found id: ""
	I0410 22:50:46.485611   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.485618   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:46.485625   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:46.485683   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:46.524199   57719 cri.go:89] found id: ""
	I0410 22:50:46.524227   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.524234   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:46.524240   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:46.524288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:46.560655   57719 cri.go:89] found id: ""
	I0410 22:50:46.560685   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.560694   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:46.560701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:46.560839   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:46.596617   57719 cri.go:89] found id: ""
	I0410 22:50:46.596646   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.596658   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:46.596666   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:46.596739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:46.634316   57719 cri.go:89] found id: ""
	I0410 22:50:46.634339   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.634347   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:46.634352   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:46.634399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:46.671466   57719 cri.go:89] found id: ""
	I0410 22:50:46.671493   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.671502   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:46.671509   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:46.671582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:46.709228   57719 cri.go:89] found id: ""
	I0410 22:50:46.709254   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.709265   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:46.709275   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:46.709291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:46.761329   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:46.761366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:46.778265   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:46.778288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:46.851092   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:46.851113   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:46.851125   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:46.929181   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:46.929223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:49.471285   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:49.485474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:49.485551   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:49.523799   57719 cri.go:89] found id: ""
	I0410 22:50:49.523826   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.523838   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:49.523846   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:49.523899   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:49.562102   57719 cri.go:89] found id: ""
	I0410 22:50:49.562129   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.562137   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:49.562143   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:49.562196   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:49.600182   57719 cri.go:89] found id: ""
	I0410 22:50:49.600204   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.600211   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:49.600216   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:49.600262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:49.640002   57719 cri.go:89] found id: ""
	I0410 22:50:49.640028   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.640039   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:49.640047   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:49.640111   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:49.678815   57719 cri.go:89] found id: ""
	I0410 22:50:49.678847   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.678858   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:49.678866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:49.678929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:49.716933   57719 cri.go:89] found id: ""
	I0410 22:50:49.716959   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.716969   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:49.716976   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:49.717039   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:49.756018   57719 cri.go:89] found id: ""
	I0410 22:50:49.756050   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.756060   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:49.756068   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:49.756132   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:49.802066   57719 cri.go:89] found id: ""
	I0410 22:50:49.802094   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.802103   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:49.802110   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:49.802123   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:49.856363   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:49.856417   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:49.872297   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:49.872330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:49.950152   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:49.950174   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:49.950185   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:50.031251   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:50.031291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:48.517547   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.517942   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:49.150498   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:51.151491   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.401650   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.401866   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.574794   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:52.589052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:52.589117   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:52.625911   57719 cri.go:89] found id: ""
	I0410 22:50:52.625941   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.625952   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:52.625960   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:52.626020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:52.668749   57719 cri.go:89] found id: ""
	I0410 22:50:52.668773   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.668781   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:52.668787   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:52.668835   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:52.713420   57719 cri.go:89] found id: ""
	I0410 22:50:52.713447   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.713457   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:52.713473   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:52.713538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:52.750265   57719 cri.go:89] found id: ""
	I0410 22:50:52.750294   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.750301   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:52.750307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:52.750354   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:52.787552   57719 cri.go:89] found id: ""
	I0410 22:50:52.787586   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.787597   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:52.787604   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:52.787670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:52.827988   57719 cri.go:89] found id: ""
	I0410 22:50:52.828013   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.828020   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:52.828026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:52.828072   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:52.864115   57719 cri.go:89] found id: ""
	I0410 22:50:52.864144   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.864155   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:52.864161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:52.864222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:52.906673   57719 cri.go:89] found id: ""
	I0410 22:50:52.906702   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.906712   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:52.906723   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:52.906742   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:52.960842   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:52.960892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:52.976084   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:52.976114   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:53.052612   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:53.052638   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:53.052656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:53.132465   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:53.132518   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:53.018789   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.518169   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:53.154117   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.653267   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:54.903797   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:57.401445   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.676947   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:55.691098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:55.691183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:55.728711   57719 cri.go:89] found id: ""
	I0410 22:50:55.728740   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.728750   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:55.728758   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:55.728824   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:55.768540   57719 cri.go:89] found id: ""
	I0410 22:50:55.768568   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.768578   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:55.768584   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:55.768649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:55.806901   57719 cri.go:89] found id: ""
	I0410 22:50:55.806928   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.806938   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:55.806945   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:55.807019   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:55.846777   57719 cri.go:89] found id: ""
	I0410 22:50:55.846807   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.846816   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:55.846822   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:55.846873   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:55.887143   57719 cri.go:89] found id: ""
	I0410 22:50:55.887172   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.887181   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:55.887186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:55.887241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:55.929008   57719 cri.go:89] found id: ""
	I0410 22:50:55.929032   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.929040   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:55.929046   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:55.929098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:55.969496   57719 cri.go:89] found id: ""
	I0410 22:50:55.969526   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.969536   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:55.969544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:55.969605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:56.007786   57719 cri.go:89] found id: ""
	I0410 22:50:56.007818   57719 logs.go:276] 0 containers: []
	W0410 22:50:56.007828   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:56.007838   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:56.007854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:56.061616   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:56.061653   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:56.078664   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:56.078689   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:56.165015   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:56.165037   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:56.165053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:56.241928   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:56.241971   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:58.785955   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:58.799544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:58.799604   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:58.837234   57719 cri.go:89] found id: ""
	I0410 22:50:58.837264   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.837275   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:58.837283   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:58.837350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:58.877818   57719 cri.go:89] found id: ""
	I0410 22:50:58.877854   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.877861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:58.877867   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:58.877921   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:58.919705   57719 cri.go:89] found id: ""
	I0410 22:50:58.919729   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.919740   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:58.919747   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:58.919809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:58.957995   57719 cri.go:89] found id: ""
	I0410 22:50:58.958020   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.958029   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:58.958036   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:58.958091   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:58.999966   57719 cri.go:89] found id: ""
	I0410 22:50:58.999995   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.000008   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:59.000016   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:59.000088   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:59.040516   57719 cri.go:89] found id: ""
	I0410 22:50:59.040541   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.040552   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:59.040560   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:59.040623   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:59.078869   57719 cri.go:89] found id: ""
	I0410 22:50:59.078899   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.078908   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:59.078913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:59.078961   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:59.116637   57719 cri.go:89] found id: ""
	I0410 22:50:59.116663   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.116670   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:59.116679   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:59.116697   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:59.195852   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:59.195892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:59.243256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:59.243282   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:59.299195   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:59.299263   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:59.314512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:59.314537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:59.386468   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:58.016995   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.018205   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:58.151543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.650140   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:59.901858   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.902933   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:04.402128   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.886907   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:01.905169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:01.905251   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:01.944154   57719 cri.go:89] found id: ""
	I0410 22:51:01.944187   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.944198   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:01.944205   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:01.944268   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:01.982743   57719 cri.go:89] found id: ""
	I0410 22:51:01.982778   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.982789   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:01.982797   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:01.982864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:02.020072   57719 cri.go:89] found id: ""
	I0410 22:51:02.020094   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.020102   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:02.020159   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:02.020213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:02.064250   57719 cri.go:89] found id: ""
	I0410 22:51:02.064273   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.064280   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:02.064286   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:02.064339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:02.105013   57719 cri.go:89] found id: ""
	I0410 22:51:02.105045   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.105054   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:02.105060   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:02.105106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:02.145664   57719 cri.go:89] found id: ""
	I0410 22:51:02.145689   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.145695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:02.145701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:02.145759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:02.189752   57719 cri.go:89] found id: ""
	I0410 22:51:02.189831   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.189850   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:02.189857   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:02.189929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:02.228315   57719 cri.go:89] found id: ""
	I0410 22:51:02.228347   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.228358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:02.228374   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:02.228390   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:02.281425   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:02.281460   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:02.296003   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:02.296031   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:02.389572   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:02.389599   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:02.389613   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.475881   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:02.475916   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.022037   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:05.037242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:05.037304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:05.073656   57719 cri.go:89] found id: ""
	I0410 22:51:05.073687   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.073698   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:05.073705   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:05.073767   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:05.114321   57719 cri.go:89] found id: ""
	I0410 22:51:05.114348   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.114356   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:05.114361   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:05.114430   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:05.153119   57719 cri.go:89] found id: ""
	I0410 22:51:05.153156   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.153164   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:05.153170   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:05.153230   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:05.193393   57719 cri.go:89] found id: ""
	I0410 22:51:05.193420   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.193428   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:05.193433   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:05.193479   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:05.229826   57719 cri.go:89] found id: ""
	I0410 22:51:05.229853   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.229861   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:05.229867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:05.229915   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:05.265511   57719 cri.go:89] found id: ""
	I0410 22:51:05.265544   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.265555   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:05.265562   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:05.265627   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:05.302257   57719 cri.go:89] found id: ""
	I0410 22:51:05.302287   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.302297   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:05.302305   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:05.302386   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:05.347344   57719 cri.go:89] found id: ""
	I0410 22:51:05.347372   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.347380   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:05.347388   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:05.347399   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:05.421796   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:05.421817   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:05.421829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.521499   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.017660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.017945   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:02.651104   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.150286   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.150565   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:06.402266   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:08.406456   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.501803   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:05.501839   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.549161   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:05.549195   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:05.599598   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:05.599633   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.115679   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:08.130273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:08.130350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:08.172302   57719 cri.go:89] found id: ""
	I0410 22:51:08.172328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.172335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:08.172342   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:08.172390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:08.220789   57719 cri.go:89] found id: ""
	I0410 22:51:08.220812   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.220819   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:08.220825   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:08.220874   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:08.258299   57719 cri.go:89] found id: ""
	I0410 22:51:08.258328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.258341   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:08.258349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:08.258404   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:08.297698   57719 cri.go:89] found id: ""
	I0410 22:51:08.297726   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.297733   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:08.297739   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:08.297787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:08.335564   57719 cri.go:89] found id: ""
	I0410 22:51:08.335595   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.335605   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:08.335613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:08.335671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:08.373340   57719 cri.go:89] found id: ""
	I0410 22:51:08.373367   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.373377   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:08.373384   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:08.373481   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:08.413961   57719 cri.go:89] found id: ""
	I0410 22:51:08.413984   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.413993   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:08.414001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:08.414062   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:08.459449   57719 cri.go:89] found id: ""
	I0410 22:51:08.459481   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.459492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:08.459505   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:08.459521   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:08.518061   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:08.518103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.533653   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:08.533680   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:08.619882   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:08.619917   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:08.619932   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:08.696329   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:08.696364   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:09.518298   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.518877   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:09.650387   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.650614   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:10.902634   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:13.402009   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.256846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:11.271521   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:11.271582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:11.312829   57719 cri.go:89] found id: ""
	I0410 22:51:11.312851   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.312869   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:11.312876   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:11.312930   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:11.355183   57719 cri.go:89] found id: ""
	I0410 22:51:11.355210   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.355220   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:11.355227   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:11.355287   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:11.394345   57719 cri.go:89] found id: ""
	I0410 22:51:11.394376   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.394388   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:11.394396   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:11.394460   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:11.434128   57719 cri.go:89] found id: ""
	I0410 22:51:11.434155   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.434163   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:11.434169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:11.434219   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:11.473160   57719 cri.go:89] found id: ""
	I0410 22:51:11.473189   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.473201   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:11.473208   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:11.473278   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:11.513782   57719 cri.go:89] found id: ""
	I0410 22:51:11.513815   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.513826   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:11.513835   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:11.513891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:11.556057   57719 cri.go:89] found id: ""
	I0410 22:51:11.556085   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.556093   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:11.556100   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:11.556147   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:11.594557   57719 cri.go:89] found id: ""
	I0410 22:51:11.594579   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.594586   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:11.594594   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:11.594609   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:11.672795   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:11.672841   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:11.716011   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:11.716046   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:11.769372   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:11.769413   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:11.784589   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:11.784617   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:11.857051   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.358019   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:14.372116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:14.372192   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:14.412020   57719 cri.go:89] found id: ""
	I0410 22:51:14.412049   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.412061   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:14.412068   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:14.412128   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:14.450317   57719 cri.go:89] found id: ""
	I0410 22:51:14.450349   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.450360   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:14.450368   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:14.450426   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:14.509080   57719 cri.go:89] found id: ""
	I0410 22:51:14.509104   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.509110   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:14.509116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:14.509185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:14.561540   57719 cri.go:89] found id: ""
	I0410 22:51:14.561572   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.561583   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:14.561590   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:14.561670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:14.622498   57719 cri.go:89] found id: ""
	I0410 22:51:14.622528   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.622538   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:14.622546   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:14.622606   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:14.678451   57719 cri.go:89] found id: ""
	I0410 22:51:14.678481   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.678490   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:14.678498   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:14.678560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:14.720264   57719 cri.go:89] found id: ""
	I0410 22:51:14.720302   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.720315   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:14.720323   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:14.720388   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:14.758039   57719 cri.go:89] found id: ""
	I0410 22:51:14.758063   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.758071   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:14.758079   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:14.758090   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:14.808111   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:14.808171   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:14.825444   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:14.825487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:14.906859   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.906884   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:14.906899   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:14.995176   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:14.995225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:14.017397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.017624   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:14.149898   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.150320   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:15.901542   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.902391   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.541159   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:17.556679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:17.556749   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:17.595839   57719 cri.go:89] found id: ""
	I0410 22:51:17.595869   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.595880   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:17.595895   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:17.595954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:17.633921   57719 cri.go:89] found id: ""
	I0410 22:51:17.633947   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.633957   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:17.633964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:17.634033   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:17.673467   57719 cri.go:89] found id: ""
	I0410 22:51:17.673493   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.673501   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:17.673507   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:17.673554   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:17.709631   57719 cri.go:89] found id: ""
	I0410 22:51:17.709660   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.709670   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:17.709679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:17.709739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:17.760852   57719 cri.go:89] found id: ""
	I0410 22:51:17.760880   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.760893   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:17.760908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:17.760969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:17.798074   57719 cri.go:89] found id: ""
	I0410 22:51:17.798099   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.798108   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:17.798117   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:17.798178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:17.835807   57719 cri.go:89] found id: ""
	I0410 22:51:17.835839   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.835854   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:17.835863   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:17.835935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:17.876812   57719 cri.go:89] found id: ""
	I0410 22:51:17.876846   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.876856   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:17.876868   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:17.876882   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:17.891121   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:17.891149   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:17.966241   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:17.966264   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:17.966277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:18.042633   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:18.042667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:18.088294   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:18.088327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:18.518103   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.519397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:18.650784   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:21.150770   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.403127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:22.901329   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.647016   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:20.662573   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:20.662640   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:20.701147   57719 cri.go:89] found id: ""
	I0410 22:51:20.701173   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.701184   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:20.701191   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:20.701252   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:20.739005   57719 cri.go:89] found id: ""
	I0410 22:51:20.739038   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.739049   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:20.739057   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:20.739112   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:20.776335   57719 cri.go:89] found id: ""
	I0410 22:51:20.776365   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.776379   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:20.776386   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:20.776471   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:20.814755   57719 cri.go:89] found id: ""
	I0410 22:51:20.814789   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.814800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:20.814808   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:20.814867   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:20.853872   57719 cri.go:89] found id: ""
	I0410 22:51:20.853897   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.853904   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:20.853910   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:20.853958   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:20.891616   57719 cri.go:89] found id: ""
	I0410 22:51:20.891648   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.891656   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:20.891662   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:20.891710   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:20.930285   57719 cri.go:89] found id: ""
	I0410 22:51:20.930316   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.930326   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:20.930341   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:20.930398   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:20.967857   57719 cri.go:89] found id: ""
	I0410 22:51:20.967894   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.967904   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:20.967913   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:20.967934   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:21.053166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:21.053201   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:21.098860   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:21.098888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:21.150395   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:21.150430   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:21.164707   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:21.164737   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:21.251010   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:23.751441   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:23.769949   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:23.770014   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:23.809652   57719 cri.go:89] found id: ""
	I0410 22:51:23.809678   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.809686   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:23.809692   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:23.809740   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:23.847331   57719 cri.go:89] found id: ""
	I0410 22:51:23.847364   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.847374   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:23.847383   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:23.847445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:23.889459   57719 cri.go:89] found id: ""
	I0410 22:51:23.889488   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.889498   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:23.889505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:23.889564   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:23.932683   57719 cri.go:89] found id: ""
	I0410 22:51:23.932712   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.932720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:23.932727   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:23.932787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:23.974161   57719 cri.go:89] found id: ""
	I0410 22:51:23.974187   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.974194   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:23.974200   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:23.974253   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:24.013058   57719 cri.go:89] found id: ""
	I0410 22:51:24.013087   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.013098   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:24.013106   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:24.013169   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:24.052556   57719 cri.go:89] found id: ""
	I0410 22:51:24.052582   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.052590   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:24.052596   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:24.052643   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:24.089940   57719 cri.go:89] found id: ""
	I0410 22:51:24.089967   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.089974   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:24.089982   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:24.089992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:24.133198   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:24.133226   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:24.186615   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:24.186651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:24.200559   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:24.200586   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:24.277061   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:24.277093   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:24.277109   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:23.016887   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:25.018325   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:27.018514   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:23.650669   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.149198   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:24.901704   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.902227   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.902337   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.855354   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:26.870269   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:26.870329   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:26.910056   57719 cri.go:89] found id: ""
	I0410 22:51:26.910084   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.910094   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:26.910101   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:26.910163   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:26.949646   57719 cri.go:89] found id: ""
	I0410 22:51:26.949674   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.949684   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:26.949690   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:26.949759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:26.990945   57719 cri.go:89] found id: ""
	I0410 22:51:26.990970   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.990977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:26.990984   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:26.991053   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:27.029464   57719 cri.go:89] found id: ""
	I0410 22:51:27.029491   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.029500   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:27.029505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:27.029562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:27.072194   57719 cri.go:89] found id: ""
	I0410 22:51:27.072235   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.072260   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:27.072270   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:27.072339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:27.106942   57719 cri.go:89] found id: ""
	I0410 22:51:27.106969   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.106979   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:27.106985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:27.107045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:27.144851   57719 cri.go:89] found id: ""
	I0410 22:51:27.144885   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.144894   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:27.144909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:27.144970   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:27.188138   57719 cri.go:89] found id: ""
	I0410 22:51:27.188166   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.188178   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:27.188189   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:27.188204   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:27.241911   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:27.241943   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:27.255296   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:27.255322   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:27.327638   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:27.327663   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:27.327678   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:27.409048   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:27.409083   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:29.960093   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:29.975583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:29.975647   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:30.018120   57719 cri.go:89] found id: ""
	I0410 22:51:30.018149   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.018159   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:30.018166   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:30.018225   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:30.055487   57719 cri.go:89] found id: ""
	I0410 22:51:30.055511   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.055518   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:30.055524   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:30.055573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:30.093723   57719 cri.go:89] found id: ""
	I0410 22:51:30.093749   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.093756   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:30.093761   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:30.093808   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:30.138278   57719 cri.go:89] found id: ""
	I0410 22:51:30.138306   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.138317   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:30.138324   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:30.138385   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:30.174454   57719 cri.go:89] found id: ""
	I0410 22:51:30.174484   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.174495   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:30.174502   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:30.174573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:30.213189   57719 cri.go:89] found id: ""
	I0410 22:51:30.213214   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.213221   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:30.213227   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:30.213272   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:30.253264   57719 cri.go:89] found id: ""
	I0410 22:51:30.253294   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.253304   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:30.253309   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:30.253357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:30.289729   57719 cri.go:89] found id: ""
	I0410 22:51:30.289755   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.289767   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:30.289777   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:30.289793   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:30.303387   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:30.303416   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:30.381294   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
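The empty results above ("found id: \"\"", "0 containers: []") mean crictl sees no kube-apiserver, etcd, coredns or other control-plane containers in any state on this node, which is why each pass falls back to gathering kubelet, dmesg and CRI-O logs instead. A minimal Go sketch of that kind of probe follows; it assumes crictl and sudo are available on the host, and the function names are illustrative rather than minikube's own helpers.

// Sketch only: list container IDs for a given name filter via crictl,
// treating empty output as "no container found", as in the log lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	// crictl exits 0 with empty output when nothing matches the filter.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		switch {
		case err != nil:
			fmt.Printf("listing %q failed: %v\n", name, err)
		case len(ids) == 0:
			fmt.Printf("no container was found matching %q\n", name)
		default:
			fmt.Printf("%q containers: %v\n", name, ids)
		}
	}
}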
	I0410 22:51:30.381315   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:30.381331   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:29.019226   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:31.519681   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.150621   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.649807   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.903662   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:33.401827   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.468072   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:30.468110   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:30.508761   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:30.508794   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.061654   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:33.077072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:33.077146   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:33.113753   57719 cri.go:89] found id: ""
	I0410 22:51:33.113781   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.113791   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:33.113798   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:33.113848   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:33.149212   57719 cri.go:89] found id: ""
	I0410 22:51:33.149238   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.149249   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:33.149256   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:33.149321   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:33.185619   57719 cri.go:89] found id: ""
	I0410 22:51:33.185649   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.185659   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:33.185667   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:33.185725   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:33.222270   57719 cri.go:89] found id: ""
	I0410 22:51:33.222301   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.222313   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:33.222320   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:33.222375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:33.258594   57719 cri.go:89] found id: ""
	I0410 22:51:33.258624   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.258636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:33.258642   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:33.258689   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:33.298326   57719 cri.go:89] found id: ""
	I0410 22:51:33.298360   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.298368   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:33.298374   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:33.298438   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:33.337407   57719 cri.go:89] found id: ""
	I0410 22:51:33.337438   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.337449   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:33.337456   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:33.337520   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:33.374971   57719 cri.go:89] found id: ""
	I0410 22:51:33.375003   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.375014   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:33.375024   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:33.375039   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:33.415256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:33.415288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.467895   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:33.467929   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:33.484604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:33.484639   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:33.562267   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:33.562288   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:33.562299   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:34.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.519093   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:32.650396   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.150200   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.902810   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:38.401463   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.142628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:36.157825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:36.157883   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:36.199418   57719 cri.go:89] found id: ""
	I0410 22:51:36.199446   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.199456   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:36.199463   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:36.199523   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:36.238136   57719 cri.go:89] found id: ""
	I0410 22:51:36.238166   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.238174   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:36.238180   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:36.238229   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:36.273995   57719 cri.go:89] found id: ""
	I0410 22:51:36.274026   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.274037   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:36.274049   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:36.274110   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:36.311007   57719 cri.go:89] found id: ""
	I0410 22:51:36.311039   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.311049   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:36.311057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:36.311122   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:36.351062   57719 cri.go:89] found id: ""
	I0410 22:51:36.351086   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.351093   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:36.351099   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:36.351152   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:36.388660   57719 cri.go:89] found id: ""
	I0410 22:51:36.388689   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.388703   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:36.388711   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:36.388762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:36.428715   57719 cri.go:89] found id: ""
	I0410 22:51:36.428753   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.428761   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:36.428767   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:36.428831   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:36.467186   57719 cri.go:89] found id: ""
	I0410 22:51:36.467213   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.467220   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:36.467228   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:36.467239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:36.521831   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:36.521860   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:36.536929   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:36.536957   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:36.614624   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
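Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty crictl listings: no apiserver container exists, so nothing is serving the secure port. A quick reachability check of that port confirms the same thing independently of kubectl; the snippet below is a minimal sketch, with the address taken from the error text rather than from any minikube configuration.

// Sketch only: probe the apiserver's secure port; a "connection refused"
// error here matches the kubectl failure seen in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}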
	I0410 22:51:36.614647   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:36.614659   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:36.694604   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:36.694646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.240039   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:39.255177   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:39.255262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:39.293063   57719 cri.go:89] found id: ""
	I0410 22:51:39.293091   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.293113   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:39.293120   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:39.293181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:39.331603   57719 cri.go:89] found id: ""
	I0410 22:51:39.331631   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.331639   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:39.331645   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:39.331697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:39.372881   57719 cri.go:89] found id: ""
	I0410 22:51:39.372908   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.372919   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:39.372926   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:39.372987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:39.417399   57719 cri.go:89] found id: ""
	I0410 22:51:39.417425   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.417435   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:39.417442   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:39.417503   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:39.458836   57719 cri.go:89] found id: ""
	I0410 22:51:39.458868   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.458877   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:39.458882   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:39.458932   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:39.496436   57719 cri.go:89] found id: ""
	I0410 22:51:39.496460   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.496467   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:39.496474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:39.496532   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:39.534649   57719 cri.go:89] found id: ""
	I0410 22:51:39.534681   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.534690   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:39.534695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:39.534754   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:39.571677   57719 cri.go:89] found id: ""
	I0410 22:51:39.571698   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.571705   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:39.571714   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:39.571725   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.621445   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:39.621482   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:39.676341   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:39.676382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:39.691543   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:39.691573   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:39.769452   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:39.769477   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:39.769493   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:39.017483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:41.020027   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:37.651534   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.151404   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.401635   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.401931   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.401972   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
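The interleaved pod_ready lines come from other test clusters running in parallel (PIDs 58186, 58701 and 57270); each is polling a metrics-server pod in kube-system and reporting that its Ready condition is still False. A minimal client-go sketch of that style of poll follows; it assumes client-go is available, a kubeconfig exists at the default location, and the pod name is copied from the log purely as an example.

// Sketch only: poll a pod and report whether its PodReady condition is True,
// roughly what the pod_ready log lines above are doing.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 10; i++ {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-4r9pl", metav1.GetOptions{})
		if err != nil {
			fmt.Println("get pod:", err)
		} else {
			fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
		}
		time.Sleep(2 * time.Second)
	}
}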
	I0410 22:51:42.350823   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:42.367124   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:42.367199   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:42.407511   57719 cri.go:89] found id: ""
	I0410 22:51:42.407545   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.407554   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:42.407560   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:42.407622   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:42.442913   57719 cri.go:89] found id: ""
	I0410 22:51:42.442948   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.442958   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:42.442964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:42.443027   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:42.480747   57719 cri.go:89] found id: ""
	I0410 22:51:42.480777   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.480786   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:42.480792   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:42.480846   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:42.521610   57719 cri.go:89] found id: ""
	I0410 22:51:42.521635   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.521644   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:42.521651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:42.521698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:42.561076   57719 cri.go:89] found id: ""
	I0410 22:51:42.561108   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.561119   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:42.561127   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:42.561189   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:42.598034   57719 cri.go:89] found id: ""
	I0410 22:51:42.598059   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.598066   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:42.598072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:42.598129   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:42.637051   57719 cri.go:89] found id: ""
	I0410 22:51:42.637085   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.637095   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:42.637103   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:42.637162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:42.676051   57719 cri.go:89] found id: ""
	I0410 22:51:42.676084   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.676094   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:42.676105   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:42.676120   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:42.719607   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:42.719634   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:42.770791   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:42.770829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:42.785704   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:42.785730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:42.876445   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:42.876475   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:42.876490   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:43.518453   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.019450   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.650486   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.650894   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:47.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.901358   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:48.902417   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:45.458721   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:45.474125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:45.474203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:45.511105   57719 cri.go:89] found id: ""
	I0410 22:51:45.511143   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.511153   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:45.511161   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:45.511220   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:45.552891   57719 cri.go:89] found id: ""
	I0410 22:51:45.552916   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.552924   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:45.552930   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:45.552986   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:45.592423   57719 cri.go:89] found id: ""
	I0410 22:51:45.592458   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.592474   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:45.592481   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:45.592542   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:45.630964   57719 cri.go:89] found id: ""
	I0410 22:51:45.631009   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.631026   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:45.631033   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:45.631098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:45.669557   57719 cri.go:89] found id: ""
	I0410 22:51:45.669586   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.669595   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:45.669602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:45.669702   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:45.706359   57719 cri.go:89] found id: ""
	I0410 22:51:45.706387   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.706395   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:45.706402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:45.706463   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:45.743301   57719 cri.go:89] found id: ""
	I0410 22:51:45.743330   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.743337   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:45.743343   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:45.743390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:45.781679   57719 cri.go:89] found id: ""
	I0410 22:51:45.781703   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.781711   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:45.781718   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:45.781730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:45.835251   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:45.835286   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:45.849255   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:45.849284   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:45.918404   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:45.918436   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:45.918452   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:45.999556   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:45.999591   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.546421   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:48.561243   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:48.561314   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:48.618335   57719 cri.go:89] found id: ""
	I0410 22:51:48.618361   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.618369   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:48.618375   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:48.618445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:48.656116   57719 cri.go:89] found id: ""
	I0410 22:51:48.656151   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.656160   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:48.656167   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:48.656222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:48.694846   57719 cri.go:89] found id: ""
	I0410 22:51:48.694874   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.694884   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:48.694897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:48.694971   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:48.731988   57719 cri.go:89] found id: ""
	I0410 22:51:48.732020   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.732031   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:48.732039   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:48.732102   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:48.768595   57719 cri.go:89] found id: ""
	I0410 22:51:48.768627   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.768636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:48.768643   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:48.768708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:48.807263   57719 cri.go:89] found id: ""
	I0410 22:51:48.807292   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.807302   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:48.807308   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:48.807366   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:48.845291   57719 cri.go:89] found id: ""
	I0410 22:51:48.845317   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.845325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:48.845329   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:48.845399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:48.891056   57719 cri.go:89] found id: ""
	I0410 22:51:48.891081   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.891091   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:48.891102   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:48.891117   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.931963   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:48.931992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:48.985539   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:48.985579   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:49.000685   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:49.000716   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:49.076097   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:49.076127   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:49.076143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:48.517879   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.018479   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:49.150511   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.650519   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.400971   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:53.401596   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.663336   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:51.678249   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:51.678315   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:51.720062   57719 cri.go:89] found id: ""
	I0410 22:51:51.720088   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.720096   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:51.720103   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:51.720164   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:51.766351   57719 cri.go:89] found id: ""
	I0410 22:51:51.766387   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.766395   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:51.766401   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:51.766448   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:51.813037   57719 cri.go:89] found id: ""
	I0410 22:51:51.813068   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.813080   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:51.813087   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:51.813150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:51.849232   57719 cri.go:89] found id: ""
	I0410 22:51:51.849262   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.849273   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:51.849280   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:51.849346   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:51.886392   57719 cri.go:89] found id: ""
	I0410 22:51:51.886415   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.886422   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:51.886428   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:51.886485   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:51.930859   57719 cri.go:89] found id: ""
	I0410 22:51:51.930896   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.930905   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:51.930913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:51.930978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:51.970403   57719 cri.go:89] found id: ""
	I0410 22:51:51.970501   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.970524   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:51.970533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:51.970599   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:52.008281   57719 cri.go:89] found id: ""
	I0410 22:51:52.008311   57719 logs.go:276] 0 containers: []
	W0410 22:51:52.008322   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:52.008333   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:52.008347   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:52.060623   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:52.060656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:52.075529   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:52.075559   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:52.158330   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:52.158356   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:52.158371   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:52.236356   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:52.236392   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:54.782448   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:54.796928   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:54.796997   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:54.836297   57719 cri.go:89] found id: ""
	I0410 22:51:54.836326   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.836335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:54.836341   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:54.836390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:54.873501   57719 cri.go:89] found id: ""
	I0410 22:51:54.873532   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.873540   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:54.873547   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:54.873617   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:54.914200   57719 cri.go:89] found id: ""
	I0410 22:51:54.914227   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.914238   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:54.914247   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:54.914308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:54.958654   57719 cri.go:89] found id: ""
	I0410 22:51:54.958682   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.958693   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:54.958702   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:54.958761   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:55.017032   57719 cri.go:89] found id: ""
	I0410 22:51:55.017078   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.017090   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:55.017101   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:55.017167   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:55.093024   57719 cri.go:89] found id: ""
	I0410 22:51:55.093059   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.093070   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:55.093085   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:55.093156   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:55.142412   57719 cri.go:89] found id: ""
	I0410 22:51:55.142441   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.142456   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:55.142464   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:55.142521   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:55.180116   57719 cri.go:89] found id: ""
	I0410 22:51:55.180147   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.180159   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:55.180169   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:55.180186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:55.249118   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:55.249139   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:55.249153   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:55.327558   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:55.327597   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:55.373127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:55.373163   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:53.518589   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.017080   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:54.151372   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.650238   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.401716   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:57.902174   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.431602   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:55.431647   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:57.947559   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:57.962916   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:57.962983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:58.000955   57719 cri.go:89] found id: ""
	I0410 22:51:58.000983   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.000990   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:58.000997   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:58.001049   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:58.040556   57719 cri.go:89] found id: ""
	I0410 22:51:58.040579   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.040586   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:58.040592   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:58.040649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:58.079121   57719 cri.go:89] found id: ""
	I0410 22:51:58.079148   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.079155   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:58.079161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:58.079240   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:58.119876   57719 cri.go:89] found id: ""
	I0410 22:51:58.119902   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.119914   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:58.119929   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:58.119987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:58.160130   57719 cri.go:89] found id: ""
	I0410 22:51:58.160162   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.160173   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:58.160181   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:58.160258   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:58.198162   57719 cri.go:89] found id: ""
	I0410 22:51:58.198195   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.198207   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:58.198215   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:58.198266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:58.235049   57719 cri.go:89] found id: ""
	I0410 22:51:58.235078   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.235089   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:58.235096   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:58.235157   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:58.275786   57719 cri.go:89] found id: ""
	I0410 22:51:58.275825   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.275845   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:58.275856   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:58.275872   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:58.316246   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:58.316277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:58.371614   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:58.371649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:58.386610   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:58.386646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:58.465167   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:58.465187   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:58.465199   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:58.018362   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.517710   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:59.152119   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.650566   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.401148   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:02.401494   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.401624   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.049405   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:01.073251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:01.073328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:01.125169   57719 cri.go:89] found id: ""
	I0410 22:52:01.125201   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.125212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:01.125220   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:01.125289   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:01.171256   57719 cri.go:89] found id: ""
	I0410 22:52:01.171289   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.171300   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:01.171308   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:01.171376   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:01.210444   57719 cri.go:89] found id: ""
	I0410 22:52:01.210478   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.210489   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:01.210503   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:01.210568   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:01.252448   57719 cri.go:89] found id: ""
	I0410 22:52:01.252473   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.252480   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:01.252486   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:01.252531   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:01.293084   57719 cri.go:89] found id: ""
	I0410 22:52:01.293117   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.293128   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:01.293136   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:01.293208   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:01.330992   57719 cri.go:89] found id: ""
	I0410 22:52:01.331019   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.331026   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:01.331032   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:01.331081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:01.369286   57719 cri.go:89] found id: ""
	I0410 22:52:01.369315   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.369325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:01.369331   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:01.369378   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:01.409888   57719 cri.go:89] found id: ""
	I0410 22:52:01.409916   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.409924   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:01.409933   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:01.409944   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:01.484535   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:01.484557   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:01.484569   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:01.565727   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:01.565778   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:01.606987   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:01.607018   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:01.659492   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:01.659529   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.174971   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:04.190302   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:04.190382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:04.230050   57719 cri.go:89] found id: ""
	I0410 22:52:04.230080   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.230090   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:04.230097   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:04.230162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:04.269870   57719 cri.go:89] found id: ""
	I0410 22:52:04.269902   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.269908   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:04.269914   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:04.269969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:04.310977   57719 cri.go:89] found id: ""
	I0410 22:52:04.311008   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.311019   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:04.311026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:04.311096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:04.349108   57719 cri.go:89] found id: ""
	I0410 22:52:04.349136   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.349147   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:04.349154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:04.349216   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:04.389590   57719 cri.go:89] found id: ""
	I0410 22:52:04.389613   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.389625   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:04.389633   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:04.389697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:04.432962   57719 cri.go:89] found id: ""
	I0410 22:52:04.432989   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.433001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:04.433008   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:04.433070   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:04.473912   57719 cri.go:89] found id: ""
	I0410 22:52:04.473946   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.473955   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:04.473960   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:04.474029   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:04.516157   57719 cri.go:89] found id: ""
	I0410 22:52:04.516182   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.516192   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:04.516203   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:04.516218   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:04.569047   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:04.569082   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:04.622639   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:04.622673   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.638441   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:04.638470   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:04.718203   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:04.718227   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:04.718241   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:02.518104   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.519509   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.519648   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.150041   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.150157   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.902111   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.902816   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:07.302147   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:07.315919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:07.315984   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:07.354692   57719 cri.go:89] found id: ""
	I0410 22:52:07.354723   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.354733   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:07.354740   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:07.354803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:07.393418   57719 cri.go:89] found id: ""
	I0410 22:52:07.393447   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.393459   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:07.393466   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:07.393525   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:07.436810   57719 cri.go:89] found id: ""
	I0410 22:52:07.436837   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.436847   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:07.436855   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:07.436920   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:07.478685   57719 cri.go:89] found id: ""
	I0410 22:52:07.478709   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.478720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:07.478735   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:07.478792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:07.515699   57719 cri.go:89] found id: ""
	I0410 22:52:07.515727   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.515737   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:07.515744   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:07.515805   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:07.556419   57719 cri.go:89] found id: ""
	I0410 22:52:07.556443   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.556451   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:07.556457   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:07.556560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:07.598076   57719 cri.go:89] found id: ""
	I0410 22:52:07.598106   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.598113   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:07.598119   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:07.598183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:07.637778   57719 cri.go:89] found id: ""
	I0410 22:52:07.637814   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.637826   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:07.637839   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:07.637854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:07.693688   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:07.693728   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:07.709256   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:07.709289   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:07.778519   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:07.778544   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:07.778584   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:07.858937   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:07.858973   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.405765   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:10.422019   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:10.422083   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:09.017771   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.017883   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.151568   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.650989   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.402181   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.902520   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.463779   57719 cri.go:89] found id: ""
	I0410 22:52:10.463818   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.463829   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:10.463836   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:10.463923   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:10.503680   57719 cri.go:89] found id: ""
	I0410 22:52:10.503710   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.503718   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:10.503736   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:10.503804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:10.545567   57719 cri.go:89] found id: ""
	I0410 22:52:10.545594   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.545605   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:10.545613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:10.545671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:10.590864   57719 cri.go:89] found id: ""
	I0410 22:52:10.590892   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.590901   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:10.590908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:10.590968   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:10.634628   57719 cri.go:89] found id: ""
	I0410 22:52:10.634659   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.634670   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:10.634677   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:10.634758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:10.681477   57719 cri.go:89] found id: ""
	I0410 22:52:10.681507   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.681526   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:10.681533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:10.681585   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:10.725203   57719 cri.go:89] found id: ""
	I0410 22:52:10.725229   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.725328   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:10.725368   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:10.725443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:10.764994   57719 cri.go:89] found id: ""
	I0410 22:52:10.765028   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.765036   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:10.765044   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:10.765094   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.808981   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:10.809012   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:10.866429   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:10.866468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:10.882512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:10.882537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:10.963016   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:10.963041   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:10.963053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:13.544552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:13.558161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:13.558238   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:13.596945   57719 cri.go:89] found id: ""
	I0410 22:52:13.596977   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.596988   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:13.596996   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:13.597057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:13.637920   57719 cri.go:89] found id: ""
	I0410 22:52:13.637944   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.637951   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:13.637958   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:13.638012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:13.676777   57719 cri.go:89] found id: ""
	I0410 22:52:13.676808   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.676819   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:13.676826   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:13.676887   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:13.714054   57719 cri.go:89] found id: ""
	I0410 22:52:13.714078   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.714086   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:13.714091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:13.714142   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:13.757162   57719 cri.go:89] found id: ""
	I0410 22:52:13.757194   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.757206   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:13.757214   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:13.757276   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:13.793578   57719 cri.go:89] found id: ""
	I0410 22:52:13.793616   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.793629   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:13.793636   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:13.793697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:13.831307   57719 cri.go:89] found id: ""
	I0410 22:52:13.831336   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.831346   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:13.831353   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:13.831400   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:13.872072   57719 cri.go:89] found id: ""
	I0410 22:52:13.872109   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.872117   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:13.872127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:13.872143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:13.926909   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:13.926947   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:13.943095   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:13.943126   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:14.015301   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:14.015336   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:14.015351   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:14.101100   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:14.101137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:13.019599   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.517932   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.150248   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.650269   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.401396   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.402384   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.650213   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:16.664603   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:16.664677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:16.701498   57719 cri.go:89] found id: ""
	I0410 22:52:16.701527   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.701539   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:16.701547   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:16.701618   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:16.740687   57719 cri.go:89] found id: ""
	I0410 22:52:16.740716   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.740725   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:16.740730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:16.740789   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:16.777349   57719 cri.go:89] found id: ""
	I0410 22:52:16.777372   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.777380   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:16.777385   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:16.777454   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:16.819855   57719 cri.go:89] found id: ""
	I0410 22:52:16.819890   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.819900   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:16.819909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:16.819973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:16.859939   57719 cri.go:89] found id: ""
	I0410 22:52:16.859970   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.859981   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:16.859991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:16.860056   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:16.897861   57719 cri.go:89] found id: ""
	I0410 22:52:16.897886   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.897893   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:16.897899   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:16.897962   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:16.935642   57719 cri.go:89] found id: ""
	I0410 22:52:16.935673   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.935681   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:16.935687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:16.935733   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:16.974268   57719 cri.go:89] found id: ""
	I0410 22:52:16.974294   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.974302   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:16.974311   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:16.974327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:17.027850   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:17.027888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:17.043343   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:17.043379   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:17.120945   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:17.120967   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:17.120979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.204831   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:17.204868   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:19.749712   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:19.764102   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:19.764181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:19.800759   57719 cri.go:89] found id: ""
	I0410 22:52:19.800787   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.800795   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:19.800801   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:19.800851   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:19.839678   57719 cri.go:89] found id: ""
	I0410 22:52:19.839711   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.839723   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:19.839730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:19.839791   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:19.876983   57719 cri.go:89] found id: ""
	I0410 22:52:19.877007   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.877015   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:19.877020   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:19.877081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:19.918139   57719 cri.go:89] found id: ""
	I0410 22:52:19.918167   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.918177   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:19.918186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:19.918243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:19.954770   57719 cri.go:89] found id: ""
	I0410 22:52:19.954808   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.954818   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:19.954825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:19.954881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:19.993643   57719 cri.go:89] found id: ""
	I0410 22:52:19.993670   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.993680   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:19.993687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:19.993746   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:20.030466   57719 cri.go:89] found id: ""
	I0410 22:52:20.030494   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.030503   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:20.030510   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:20.030575   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:20.069264   57719 cri.go:89] found id: ""
	I0410 22:52:20.069291   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.069299   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:20.069307   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:20.069318   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:20.117354   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:20.117382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:20.170758   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:20.170800   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:20.187014   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:20.187055   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:20.269620   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:20.269645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:20.269661   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.518440   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.018602   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.151102   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.151664   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.901836   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:23.401655   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.844841   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:22.861923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:22.861983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:22.907972   57719 cri.go:89] found id: ""
	I0410 22:52:22.908000   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.908010   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:22.908017   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:22.908081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:22.949822   57719 cri.go:89] found id: ""
	I0410 22:52:22.949851   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.949861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:22.949869   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:22.949935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:22.989872   57719 cri.go:89] found id: ""
	I0410 22:52:22.989895   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.989902   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:22.989908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:22.989959   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:23.031881   57719 cri.go:89] found id: ""
	I0410 22:52:23.031900   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.031908   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:23.031913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:23.031978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:23.071691   57719 cri.go:89] found id: ""
	I0410 22:52:23.071719   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.071726   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:23.071732   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:23.071792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:23.109961   57719 cri.go:89] found id: ""
	I0410 22:52:23.109990   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.110001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:23.110009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:23.110069   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:23.152955   57719 cri.go:89] found id: ""
	I0410 22:52:23.152979   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.152986   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:23.152991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:23.153054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:23.191883   57719 cri.go:89] found id: ""
	I0410 22:52:23.191924   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.191935   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:23.191947   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:23.191959   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:23.232692   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:23.232731   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:23.283648   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:23.283684   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:23.297701   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:23.297729   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:23.381657   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:23.381673   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:23.381685   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:22.520899   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.016955   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.018541   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.650053   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.402084   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.402670   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.961531   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:25.977539   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:25.977639   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:26.021844   57719 cri.go:89] found id: ""
	I0410 22:52:26.021875   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.021886   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:26.021893   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:26.021954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:26.064286   57719 cri.go:89] found id: ""
	I0410 22:52:26.064316   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.064327   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:26.064335   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:26.064394   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:26.104381   57719 cri.go:89] found id: ""
	I0410 22:52:26.104426   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.104437   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:26.104445   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:26.104522   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:26.143382   57719 cri.go:89] found id: ""
	I0410 22:52:26.143407   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.143417   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:26.143424   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:26.143489   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:26.179609   57719 cri.go:89] found id: ""
	I0410 22:52:26.179635   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.179646   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:26.179652   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:26.179714   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:26.217660   57719 cri.go:89] found id: ""
	I0410 22:52:26.217689   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.217695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:26.217701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:26.217758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:26.254914   57719 cri.go:89] found id: ""
	I0410 22:52:26.254946   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.254956   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:26.254963   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:26.255047   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:26.293738   57719 cri.go:89] found id: ""
	I0410 22:52:26.293769   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.293779   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:26.293790   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:26.293809   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:26.366700   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:26.366725   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:26.366741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:26.445143   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:26.445183   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:26.493175   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:26.493203   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:26.554952   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:26.554992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.072225   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:29.087075   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:29.087150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:29.131314   57719 cri.go:89] found id: ""
	I0410 22:52:29.131345   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.131357   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:29.131365   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:29.131427   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:29.169263   57719 cri.go:89] found id: ""
	I0410 22:52:29.169289   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.169298   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:29.169304   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:29.169357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:29.209535   57719 cri.go:89] found id: ""
	I0410 22:52:29.209559   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.209570   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:29.209575   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:29.209630   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:29.251172   57719 cri.go:89] found id: ""
	I0410 22:52:29.251225   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.251233   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:29.251238   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:29.251290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:29.296142   57719 cri.go:89] found id: ""
	I0410 22:52:29.296169   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.296179   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:29.296185   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:29.296245   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:29.336910   57719 cri.go:89] found id: ""
	I0410 22:52:29.336933   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.336940   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:29.336946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:29.337003   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:29.396332   57719 cri.go:89] found id: ""
	I0410 22:52:29.396371   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.396382   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:29.396390   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:29.396475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:29.438301   57719 cri.go:89] found id: ""
	I0410 22:52:29.438332   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.438340   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:29.438348   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:29.438360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:29.482687   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:29.482711   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:29.535115   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:29.535146   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.551736   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:29.551760   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:29.624162   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:29.624198   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:29.624213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:29.517873   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.519737   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.650947   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.651296   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.150101   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.901370   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.902050   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.401849   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.204355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:32.218239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:32.218310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:32.255412   57719 cri.go:89] found id: ""
	I0410 22:52:32.255440   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.255451   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:32.255458   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:32.255516   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:32.293553   57719 cri.go:89] found id: ""
	I0410 22:52:32.293580   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.293591   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:32.293604   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:32.293663   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:32.332814   57719 cri.go:89] found id: ""
	I0410 22:52:32.332846   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.332855   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:32.332862   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:32.332924   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:32.371312   57719 cri.go:89] found id: ""
	I0410 22:52:32.371347   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.371368   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:32.371376   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:32.371441   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:32.407630   57719 cri.go:89] found id: ""
	I0410 22:52:32.407652   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.407659   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:32.407664   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:32.407720   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:32.444878   57719 cri.go:89] found id: ""
	I0410 22:52:32.444904   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.444914   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:32.444923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:32.444989   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:32.490540   57719 cri.go:89] found id: ""
	I0410 22:52:32.490567   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.490578   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:32.490586   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:32.490644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:32.528911   57719 cri.go:89] found id: ""
	I0410 22:52:32.528953   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.528961   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:32.528969   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:32.528979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:32.608601   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:32.608626   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:32.608641   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:32.684840   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:32.684876   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:32.728092   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:32.728132   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:32.778491   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:32.778524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.296228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:35.310615   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:35.310705   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:35.377585   57719 cri.go:89] found id: ""
	I0410 22:52:35.377612   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.377623   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:35.377632   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:35.377692   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:35.417734   57719 cri.go:89] found id: ""
	I0410 22:52:35.417775   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.417796   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:35.417803   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:35.417864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:34.017119   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.017526   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.150859   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.151112   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.402036   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.402201   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:35.456256   57719 cri.go:89] found id: ""
	I0410 22:52:35.456281   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.456291   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:35.456298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:35.456382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:35.495233   57719 cri.go:89] found id: ""
	I0410 22:52:35.495257   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.495267   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:35.495274   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:35.495333   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:35.535239   57719 cri.go:89] found id: ""
	I0410 22:52:35.535273   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.535284   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:35.535292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:35.535352   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:35.571601   57719 cri.go:89] found id: ""
	I0410 22:52:35.571628   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.571638   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:35.571645   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:35.571708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:35.612008   57719 cri.go:89] found id: ""
	I0410 22:52:35.612036   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.612045   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:35.612051   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:35.612099   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:35.649029   57719 cri.go:89] found id: ""
	I0410 22:52:35.649057   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.649065   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:35.649073   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:35.649084   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:35.702630   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:35.702668   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.718404   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:35.718433   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:35.798380   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:35.798405   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:35.798420   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:35.874049   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:35.874085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:38.416265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:38.430921   57719 kubeadm.go:591] duration metric: took 4m3.090666464s to restartPrimaryControlPlane
	W0410 22:52:38.431006   57719 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:52:38.431030   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:52:41.138973   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.707913754s)
	I0410 22:52:41.139063   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:52:41.155646   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:52:41.166345   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:52:41.176443   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:52:41.176481   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:52:41.176547   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:52:41.186887   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:52:41.186960   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:52:41.199740   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:52:41.209843   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:52:41.209901   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:52:41.219804   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.229739   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:52:41.229807   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.240127   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:52:41.249763   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:52:41.249824   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:52:41.260148   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:52:41.334127   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:52:41.334200   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:52:41.506104   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:52:41.506307   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:52:41.506488   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:52:41.715227   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:52:38.519180   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.018674   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.649983   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.152610   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.717460   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:52:41.717564   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:52:41.717654   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:52:41.717781   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:52:41.717898   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:52:41.718004   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:52:41.718099   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:52:41.718203   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:52:41.718550   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:52:41.719083   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:52:41.719413   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:52:41.719571   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:52:41.719675   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:52:41.998202   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:52:42.109508   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:52:42.315545   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:52:42.448910   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:52:42.465903   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:52:42.467312   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:52:42.467387   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:52:42.636790   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:52:40.402237   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.404435   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.638969   57719 out.go:204]   - Booting up control plane ...
	I0410 22:52:42.639106   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:52:42.652152   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:52:42.653843   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:52:42.654719   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:52:42.658006   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:52:43.518416   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.017894   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:43.650778   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.149976   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:44.902059   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.902549   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:49.401695   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.517833   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.150825   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:50.151391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.901096   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.902619   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.518616   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.519254   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:52.649783   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:54.651766   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:56.655687   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.903916   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.400789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.517303   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:59.152346   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:01.651146   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.901531   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.400690   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:02.517569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:04.517775   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.017655   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.651728   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.652505   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.901605   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.902363   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:09.018576   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:11.510820   58186 pod_ready.go:81] duration metric: took 4m0.000124062s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:11.510861   58186 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:53:11.510885   58186 pod_ready.go:38] duration metric: took 4m10.548289153s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:11.510918   58186 kubeadm.go:591] duration metric: took 4m18.480793797s to restartPrimaryControlPlane
	W0410 22:53:11.510993   58186 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:53:11.511019   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:53:08.151155   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.151358   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.400722   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.401658   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.401745   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.652391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.652682   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:17.149892   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:16.900482   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:18.900789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:19.152154   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:21.649975   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:20.902068   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:23.401500   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:22.660165   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:53:22.660260   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:22.660520   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:23.653457   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:26.149469   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:25.903070   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:28.400947   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:27.660705   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:27.660919   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:28.150895   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.650254   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.401054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.401994   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.654427   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:35.149580   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150506   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150533   58701 pod_ready.go:81] duration metric: took 4m0.00757056s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:37.150544   58701 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0410 22:53:37.150552   58701 pod_ready.go:38] duration metric: took 4m5.55870495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:37.150570   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:53:37.150602   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:37.150659   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:37.213472   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.213499   58701 cri.go:89] found id: ""
	I0410 22:53:37.213511   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:37.213561   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.218928   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:37.218997   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:37.260045   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.260066   58701 cri.go:89] found id: ""
	I0410 22:53:37.260073   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:37.260116   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.265329   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:37.265393   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:37.306649   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:37.306674   58701 cri.go:89] found id: ""
	I0410 22:53:37.306682   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:37.306729   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.311163   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:37.311213   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:37.351855   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:37.351883   58701 cri.go:89] found id: ""
	I0410 22:53:37.351890   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:37.351937   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.356427   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:37.356497   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:34.900998   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:36.901173   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:39.400680   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.661409   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:37.661698   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:37.399224   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:37.399248   58701 cri.go:89] found id: ""
	I0410 22:53:37.399257   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:37.399315   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.404314   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:37.404380   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:37.444169   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.444196   58701 cri.go:89] found id: ""
	I0410 22:53:37.444205   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:37.444264   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.448618   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:37.448693   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:37.487481   58701 cri.go:89] found id: ""
	I0410 22:53:37.487507   58701 logs.go:276] 0 containers: []
	W0410 22:53:37.487514   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:37.487519   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:37.487566   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:37.531000   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:37.531018   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.531022   58701 cri.go:89] found id: ""
	I0410 22:53:37.531029   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:37.531081   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.535679   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.539974   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:37.539998   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:37.601043   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:37.601086   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:37.616427   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:37.616458   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.669951   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:37.669983   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.716243   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:37.716273   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.774644   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:37.774678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.821033   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:37.821077   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:37.883644   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:37.883678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:38.019289   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:38.019320   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:38.057708   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:38.057739   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:38.100119   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:38.100149   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:38.143845   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:38.143875   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:38.186718   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:38.186749   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:41.168951   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:53:41.186828   58701 api_server.go:72] duration metric: took 4m17.343179611s to wait for apiserver process to appear ...
	I0410 22:53:41.186866   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:53:41.186911   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:41.186972   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:41.228167   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.228194   58701 cri.go:89] found id: ""
	I0410 22:53:41.228201   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:41.228251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.232754   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:41.232812   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:41.271497   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.271519   58701 cri.go:89] found id: ""
	I0410 22:53:41.271527   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:41.271575   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.276165   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:41.276234   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:41.319164   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.319187   58701 cri.go:89] found id: ""
	I0410 22:53:41.319195   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:41.319251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.323627   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:41.323696   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:41.366648   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:41.366671   58701 cri.go:89] found id: ""
	I0410 22:53:41.366678   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:41.366733   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.371132   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:41.371197   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:41.412956   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.412974   58701 cri.go:89] found id: ""
	I0410 22:53:41.412982   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:41.413034   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.417441   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:41.417495   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:41.460008   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.460037   58701 cri.go:89] found id: ""
	I0410 22:53:41.460048   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:41.460105   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.464422   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:41.464492   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:41.504095   58701 cri.go:89] found id: ""
	I0410 22:53:41.504126   58701 logs.go:276] 0 containers: []
	W0410 22:53:41.504134   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:41.504140   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:41.504199   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:41.543443   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.543467   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.543473   58701 cri.go:89] found id: ""
	I0410 22:53:41.543481   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:41.543540   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.548182   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.552917   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:41.552941   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.601620   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:41.601652   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.653090   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:41.653124   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.692683   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:41.692711   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.736312   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:41.736353   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:41.753242   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:41.753283   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.812881   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:41.812910   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.860686   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:41.860714   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.902523   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:41.902546   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:41.945812   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:41.945848   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:42.001012   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:42.001046   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:42.123971   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:42.124000   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:42.168773   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:42.168806   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:41.405604   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.901172   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.595677   58186 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.084634816s)
	I0410 22:53:43.595765   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:43.613470   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:53:43.624876   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:53:43.638564   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:53:43.638592   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:53:43.638641   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:53:43.652554   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:53:43.652608   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:53:43.664263   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:53:43.674443   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:53:43.674497   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:53:43.695444   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.705446   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:53:43.705518   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.716451   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:53:43.726343   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:53:43.726407   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:53:43.736859   58186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:53:43.957994   58186 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:53:45.115742   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:53:45.120239   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:53:45.121662   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:53:45.121690   58701 api_server.go:131] duration metric: took 3.934815447s to wait for apiserver health ...
	I0410 22:53:45.121699   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:53:45.121727   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:45.121780   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:45.172291   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.172315   58701 cri.go:89] found id: ""
	I0410 22:53:45.172324   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:45.172382   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.177041   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:45.177103   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:45.213853   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:45.213880   58701 cri.go:89] found id: ""
	I0410 22:53:45.213889   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:45.213944   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.218478   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:45.218546   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:45.268753   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.268779   58701 cri.go:89] found id: ""
	I0410 22:53:45.268792   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:45.268843   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.273223   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:45.273291   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:45.314032   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:45.314057   58701 cri.go:89] found id: ""
	I0410 22:53:45.314066   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:45.314115   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.318671   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:45.318740   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:45.356139   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.356167   58701 cri.go:89] found id: ""
	I0410 22:53:45.356177   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:45.356234   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.361449   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:45.361520   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:45.405153   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.405174   58701 cri.go:89] found id: ""
	I0410 22:53:45.405181   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:45.405230   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.409795   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:45.409871   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:45.451984   58701 cri.go:89] found id: ""
	I0410 22:53:45.452016   58701 logs.go:276] 0 containers: []
	W0410 22:53:45.452026   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:45.452034   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:45.452095   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:45.491612   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.491650   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.491656   58701 cri.go:89] found id: ""
	I0410 22:53:45.491665   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:45.491724   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.496253   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.500723   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:45.500751   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.557083   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:45.557118   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.616768   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:45.616804   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.664097   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:45.664133   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.707920   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:45.707957   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:45.751862   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:45.751898   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.806584   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:45.806619   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.846145   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:45.846170   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:45.970766   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:45.970796   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:46.024049   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:46.024081   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:46.067009   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:46.067048   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:46.462765   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:46.462812   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:46.520007   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:46.520049   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:49.047137   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:53:49.047166   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.047170   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.047174   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.047177   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.047180   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.047183   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.047189   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.047192   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.047201   58701 system_pods.go:74] duration metric: took 3.925495812s to wait for pod list to return data ...
	I0410 22:53:49.047208   58701 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:53:49.050341   58701 default_sa.go:45] found service account: "default"
	I0410 22:53:49.050363   58701 default_sa.go:55] duration metric: took 3.148222ms for default service account to be created ...
	I0410 22:53:49.050371   58701 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:53:49.056364   58701 system_pods.go:86] 8 kube-system pods found
	I0410 22:53:49.056390   58701 system_pods.go:89] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.056414   58701 system_pods.go:89] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.056423   58701 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.056431   58701 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.056437   58701 system_pods.go:89] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.056444   58701 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.056455   58701 system_pods.go:89] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.056462   58701 system_pods.go:89] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.056475   58701 system_pods.go:126] duration metric: took 6.097239ms to wait for k8s-apps to be running ...
	I0410 22:53:49.056492   58701 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:53:49.056537   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:49.077239   58701 system_svc.go:56] duration metric: took 20.737127ms WaitForService to wait for kubelet
	I0410 22:53:49.077269   58701 kubeadm.go:576] duration metric: took 4m25.233626302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:53:49.077297   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:53:49.080463   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:53:49.080486   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:53:49.080497   58701 node_conditions.go:105] duration metric: took 3.195662ms to run NodePressure ...
	I0410 22:53:49.080508   58701 start.go:240] waiting for startup goroutines ...
	I0410 22:53:49.080515   58701 start.go:245] waiting for cluster config update ...
	I0410 22:53:49.080525   58701 start.go:254] writing updated cluster config ...
	I0410 22:53:49.080805   58701 ssh_runner.go:195] Run: rm -f paused
	I0410 22:53:49.141489   58701 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:53:49.143597   58701 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-519831" cluster and "default" namespace by default
	I0410 22:53:45.903632   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:48.403981   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.064071   58186 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0410 22:53:53.064154   58186 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:53:53.064260   58186 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:53:53.064429   58186 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:53:53.064574   58186 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:53:53.064670   58186 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:53:53.066595   58186 out.go:204]   - Generating certificates and keys ...
	I0410 22:53:53.066703   58186 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:53:53.066808   58186 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:53:53.066929   58186 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:53:53.067023   58186 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:53:53.067155   58186 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:53:53.067235   58186 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:53:53.067329   58186 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:53:53.067433   58186 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:53:53.067546   58186 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:53:53.067655   58186 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:53:53.067733   58186 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:53:53.067890   58186 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:53:53.067961   58186 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:53:53.068049   58186 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:53:53.068132   58186 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:53:53.068232   58186 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:53:53.068310   58186 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:53:53.068379   58186 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:53:53.068510   58186 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:53:53.070126   58186 out.go:204]   - Booting up control plane ...
	I0410 22:53:53.070219   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:53:53.070324   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:53:53.070425   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:53:53.070565   58186 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:53:53.070686   58186 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:53:53.070748   58186 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:53:53.070973   58186 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:53:53.071083   58186 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002820 seconds
	I0410 22:53:53.071249   58186 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:53:53.071424   58186 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:53:53.071485   58186 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:53:53.071624   58186 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-706500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:53:53.071680   58186 kubeadm.go:309] [bootstrap-token] Using token: 0wvld6.jntz9ft9bn5g46le
	I0410 22:53:53.073567   58186 out.go:204]   - Configuring RBAC rules ...
	I0410 22:53:53.073708   58186 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:53:53.073819   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:53:53.074015   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:53:53.074206   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:53:53.074370   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:53:53.074548   58186 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:53:53.074726   58186 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:53:53.074798   58186 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:53:53.074873   58186 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:53:53.074884   58186 kubeadm.go:309] 
	I0410 22:53:53.074956   58186 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:53:53.074978   58186 kubeadm.go:309] 
	I0410 22:53:53.075077   58186 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:53:53.075088   58186 kubeadm.go:309] 
	I0410 22:53:53.075119   58186 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:53:53.075191   58186 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:53:53.075262   58186 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:53:53.075273   58186 kubeadm.go:309] 
	I0410 22:53:53.075337   58186 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:53:53.075353   58186 kubeadm.go:309] 
	I0410 22:53:53.075419   58186 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:53:53.075437   58186 kubeadm.go:309] 
	I0410 22:53:53.075503   58186 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:53:53.075621   58186 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:53:53.075714   58186 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:53:53.075724   58186 kubeadm.go:309] 
	I0410 22:53:53.075829   58186 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:53:53.075936   58186 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:53:53.075953   58186 kubeadm.go:309] 
	I0410 22:53:53.076058   58186 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076196   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:53:53.076253   58186 kubeadm.go:309] 	--control-plane 
	I0410 22:53:53.076270   58186 kubeadm.go:309] 
	I0410 22:53:53.076387   58186 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:53:53.076422   58186 kubeadm.go:309] 
	I0410 22:53:53.076516   58186 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076661   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:53:53.076711   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:53:53.076726   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:53:53.078503   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:53:50.902397   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.403449   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.079631   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:53:53.132043   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:53:53.167760   58186 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:53:53.167847   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:53.167870   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-706500 minikube.k8s.io/updated_at=2024_04_10T22_53_53_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=embed-certs-706500 minikube.k8s.io/primary=true
	I0410 22:53:53.511359   58186 ops.go:34] apiserver oom_adj: -16
	I0410 22:53:53.511506   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.012080   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.511816   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.011883   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.511809   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.011572   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.512114   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:57.011878   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.900548   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.901541   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.662444   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:57.662687   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:57.511726   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.011563   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.512617   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.012145   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.512448   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.012278   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.512290   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.012507   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.512415   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:02.011660   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.401622   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.902558   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.511581   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.012326   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.512539   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.012085   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.512496   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.011911   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.512180   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.619801   58186 kubeadm.go:1107] duration metric: took 12.452015223s to wait for elevateKubeSystemPrivileges
	W0410 22:54:05.619839   58186 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:54:05.619847   58186 kubeadm.go:393] duration metric: took 5m12.640298551s to StartCluster
	I0410 22:54:05.619862   58186 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.619936   58186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:54:05.621989   58186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.622331   58186 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:54:05.624233   58186 out.go:177] * Verifying Kubernetes components...
	I0410 22:54:05.622444   58186 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:54:05.622516   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:54:05.625850   58186 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-706500"
	I0410 22:54:05.625872   58186 addons.go:69] Setting default-storageclass=true in profile "embed-certs-706500"
	I0410 22:54:05.625882   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:54:05.625893   58186 addons.go:69] Setting metrics-server=true in profile "embed-certs-706500"
	I0410 22:54:05.625924   58186 addons.go:234] Setting addon metrics-server=true in "embed-certs-706500"
	W0410 22:54:05.625930   58186 addons.go:243] addon metrics-server should already be in state true
	I0410 22:54:05.625954   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.625888   58186 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-706500"
	I0410 22:54:05.625903   58186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-706500"
	W0410 22:54:05.625982   58186 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:54:05.626012   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.626365   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626407   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626421   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626440   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626441   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626442   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.643647   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0410 22:54:05.643758   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0410 22:54:05.644070   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0410 22:54:05.644101   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644253   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644856   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644883   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644915   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.645239   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645419   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.645475   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.645489   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645501   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.646021   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.646035   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646062   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.646588   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646619   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.648242   58186 addons.go:234] Setting addon default-storageclass=true in "embed-certs-706500"
	W0410 22:54:05.648261   58186 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:54:05.648282   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.648555   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.648582   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.661773   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0410 22:54:05.662556   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.663049   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.663073   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.663474   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.663708   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.664716   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I0410 22:54:05.665027   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.665617   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.665634   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.665706   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0410 22:54:05.666342   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.666343   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.665946   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.668790   58186 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:54:05.667015   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.667244   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.670336   58186 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:05.670357   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:54:05.670374   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.668826   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.668843   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.671350   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.671633   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.673653   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.675310   58186 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:54:05.674011   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.674533   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.676671   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:54:05.676677   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.676690   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:54:05.676710   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.676713   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.676821   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.676976   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.677117   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.680146   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.680927   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.680964   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.681136   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.681515   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.681681   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.681834   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.688424   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0410 22:54:05.688861   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.689299   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.689320   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.689589   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.689741   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.691090   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.691335   58186 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:05.691353   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:54:05.691369   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.694552   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695080   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.695118   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695426   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.695771   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.695939   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.696084   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.860032   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:54:05.881036   58186 node_ready.go:35] waiting up to 6m0s for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891218   58186 node_ready.go:49] node "embed-certs-706500" has status "Ready":"True"
	I0410 22:54:05.891237   58186 node_ready.go:38] duration metric: took 10.166143ms for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891247   58186 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:05.899013   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:06.064031   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:54:06.064051   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:54:06.065727   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:06.075127   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:06.140574   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:54:06.140607   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:54:06.216389   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:06.216428   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:54:06.356117   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:07.409983   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.334826611s)
	I0410 22:54:07.410039   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410052   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410103   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.344342448s)
	I0410 22:54:07.410184   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410199   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410321   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410362   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410371   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410382   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410452   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410505   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410519   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410531   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410678   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410765   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410802   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410820   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410822   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.438723   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.438742   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.439085   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.439104   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.439085   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.738187   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.382031326s)
	I0410 22:54:07.738252   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738267   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738556   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738586   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738597   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738604   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738865   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738885   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738908   58186 addons.go:470] Verifying addon metrics-server=true in "embed-certs-706500"
	I0410 22:54:07.741639   58186 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0410 22:54:05.403374   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:07.903041   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.895154   57270 pod_ready.go:81] duration metric: took 4m0.000708165s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	E0410 22:54:08.895186   57270 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:54:08.895214   57270 pod_ready.go:38] duration metric: took 4m14.550044852s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:08.895246   57270 kubeadm.go:591] duration metric: took 4m22.444968141s to restartPrimaryControlPlane
	W0410 22:54:08.895308   57270 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:54:08.895339   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:54:07.742954   58186 addons.go:505] duration metric: took 2.120520274s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0410 22:54:07.910203   58186 pod_ready.go:102] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.906369   58186 pod_ready.go:92] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.906394   58186 pod_ready.go:81] duration metric: took 3.007348288s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.906407   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913564   58186 pod_ready.go:92] pod "coredns-76f75df574-v2pp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.913582   58186 pod_ready.go:81] duration metric: took 7.168463ms for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913592   58186 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919270   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.919296   58186 pod_ready.go:81] duration metric: took 5.696297ms for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919308   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924389   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.924430   58186 pod_ready.go:81] duration metric: took 5.111624ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924443   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929296   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.929320   58186 pod_ready.go:81] duration metric: took 4.869073ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929333   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305730   58186 pod_ready.go:92] pod "kube-proxy-xj5nq" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.305756   58186 pod_ready.go:81] duration metric: took 376.415901ms for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305770   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703841   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.703869   58186 pod_ready.go:81] duration metric: took 398.090582ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703881   58186 pod_ready.go:38] duration metric: took 3.812625835s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:09.703898   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:54:09.703957   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:54:09.720728   58186 api_server.go:72] duration metric: took 4.098354983s to wait for apiserver process to appear ...
	I0410 22:54:09.720763   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:54:09.720786   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:54:09.726522   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:54:09.727951   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:54:09.727979   58186 api_server.go:131] duration metric: took 7.20731ms to wait for apiserver health ...
	I0410 22:54:09.727989   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:54:09.908166   58186 system_pods.go:59] 9 kube-system pods found
	I0410 22:54:09.908203   58186 system_pods.go:61] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:09.908212   58186 system_pods.go:61] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:09.908217   58186 system_pods.go:61] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:09.908222   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:09.908227   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:09.908232   58186 system_pods.go:61] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:09.908236   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:09.908246   58186 system_pods.go:61] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:09.908251   58186 system_pods.go:61] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:09.908263   58186 system_pods.go:74] duration metric: took 180.267138ms to wait for pod list to return data ...
	I0410 22:54:09.908276   58186 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:54:10.103556   58186 default_sa.go:45] found service account: "default"
	I0410 22:54:10.103586   58186 default_sa.go:55] duration metric: took 195.301798ms for default service account to be created ...
	I0410 22:54:10.103597   58186 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:54:10.309537   58186 system_pods.go:86] 9 kube-system pods found
	I0410 22:54:10.309566   58186 system_pods.go:89] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:10.309572   58186 system_pods.go:89] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:10.309578   58186 system_pods.go:89] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:10.309583   58186 system_pods.go:89] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:10.309588   58186 system_pods.go:89] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:10.309592   58186 system_pods.go:89] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:10.309596   58186 system_pods.go:89] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:10.309602   58186 system_pods.go:89] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:10.309607   58186 system_pods.go:89] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:10.309617   58186 system_pods.go:126] duration metric: took 206.014442ms to wait for k8s-apps to be running ...
	I0410 22:54:10.309624   58186 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:54:10.309666   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:10.324614   58186 system_svc.go:56] duration metric: took 14.97975ms WaitForService to wait for kubelet
	I0410 22:54:10.324651   58186 kubeadm.go:576] duration metric: took 4.702277594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:54:10.324669   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:54:10.503911   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:54:10.503939   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:54:10.503949   58186 node_conditions.go:105] duration metric: took 179.27538ms to run NodePressure ...
	I0410 22:54:10.503959   58186 start.go:240] waiting for startup goroutines ...
	I0410 22:54:10.503966   58186 start.go:245] waiting for cluster config update ...
	I0410 22:54:10.503975   58186 start.go:254] writing updated cluster config ...
	I0410 22:54:10.504242   58186 ssh_runner.go:195] Run: rm -f paused
	I0410 22:54:10.555500   58186 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:54:10.557941   58186 out.go:177] * Done! kubectl is now configured to use "embed-certs-706500" cluster and "default" namespace by default
	I0410 22:54:37.664290   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:54:37.664604   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:54:37.664634   57719 kubeadm.go:309] 
	I0410 22:54:37.664776   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:54:37.664843   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:54:37.664854   57719 kubeadm.go:309] 
	I0410 22:54:37.664901   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:54:37.664968   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:54:37.665086   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:54:37.665101   57719 kubeadm.go:309] 
	I0410 22:54:37.665245   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:54:37.665313   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:54:37.665360   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:54:37.665372   57719 kubeadm.go:309] 
	I0410 22:54:37.665579   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:54:37.665695   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0410 22:54:37.665707   57719 kubeadm.go:309] 
	I0410 22:54:37.665868   57719 kubeadm.go:309] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:54:37.666063   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:54:37.666192   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:54:37.666272   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:54:37.666284   57719 kubeadm.go:309] 
	I0410 22:54:37.667202   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:37.667329   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:54:37.667420   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0410 22:54:37.667555   57719 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
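(Editor's note) The v1.20.0 init above fails waiting for the control plane and points at the kubelet. The triage kubeadm itself suggests can be run over SSH on the node; this sketch only repeats the commands quoted in the output, and CONTAINERID is a placeholder for whatever 'crictl ps -a' reports:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID: placeholder from the ps -a output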
	
	I0410 22:54:37.667623   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:54:40.975782   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.080419546s)
	I0410 22:54:40.975854   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:40.993677   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:54:41.006185   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:41.016820   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:41.016850   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:41.016985   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:41.026802   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:41.026871   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:41.036992   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:41.046896   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:41.046962   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:41.057184   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.067261   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:41.067321   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.077846   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:41.087745   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:41.087795   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
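(Editor's note) The grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected endpoint, otherwise it is removed before kubeadm init runs again. As a shell sketch (not minikube's actual code), the same cleanup is roughly:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it points at the expected control-plane endpoint, otherwise delete it
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done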
	I0410 22:54:41.098660   57270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:41.159736   57270 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.1
	I0410 22:54:41.159807   57270 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:54:41.316137   57270 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:54:41.316279   57270 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:54:41.316446   57270 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0410 22:54:41.559720   57270 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:54:41.561946   57270 out.go:204]   - Generating certificates and keys ...
	I0410 22:54:41.562039   57270 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:54:41.562141   57270 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:54:41.562211   57270 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:54:41.562275   57270 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:54:41.562352   57270 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:54:41.562460   57270 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:54:41.562572   57270 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:54:41.562667   57270 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:54:41.562803   57270 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:54:41.562917   57270 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:54:41.562992   57270 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:54:41.563081   57270 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:54:41.723729   57270 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:54:41.834274   57270 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:54:41.936758   57270 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:54:42.038298   57270 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:54:42.229459   57270 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:54:42.230047   57270 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:54:42.233021   57270 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:54:42.236068   57270 out.go:204]   - Booting up control plane ...
	I0410 22:54:42.236197   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:54:42.236303   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:54:42.236421   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:54:42.255487   57270 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:54:42.256345   57270 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:54:42.256450   57270 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:54:42.391623   57270 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0410 22:54:42.391736   57270 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0410 22:54:43.393825   57270 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00265832s
	I0410 22:54:43.393973   57270 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0410 22:54:43.156141   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.488487447s)
	I0410 22:54:43.156227   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:43.170709   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:43.180624   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:43.180647   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:43.180701   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:43.190482   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:43.190533   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:43.200261   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:43.210061   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:43.210116   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:43.220430   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.230810   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:43.230877   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.241141   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:43.251043   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:43.251111   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:54:43.261163   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:43.534002   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:48.398196   57270 kubeadm.go:309] [api-check] The API server is healthy after 5.002218646s
	I0410 22:54:48.410618   57270 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:54:48.430553   57270 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:54:48.465343   57270 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:54:48.465614   57270 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-646133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:54:48.489066   57270 kubeadm.go:309] [bootstrap-token] Using token: 14xwwp.uyth37qsjfn0mpcx
	I0410 22:54:48.490984   57270 out.go:204]   - Configuring RBAC rules ...
	I0410 22:54:48.491116   57270 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:54:48.502789   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:54:48.516871   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:54:48.523600   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:54:48.527939   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:54:48.537216   57270 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:54:48.806350   57270 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:54:49.234618   57270 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:54:49.803640   57270 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:54:49.804948   57270 kubeadm.go:309] 
	I0410 22:54:49.805074   57270 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:54:49.805095   57270 kubeadm.go:309] 
	I0410 22:54:49.805194   57270 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:54:49.805209   57270 kubeadm.go:309] 
	I0410 22:54:49.805240   57270 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:54:49.805323   57270 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:54:49.805403   57270 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:54:49.805415   57270 kubeadm.go:309] 
	I0410 22:54:49.805482   57270 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:54:49.805489   57270 kubeadm.go:309] 
	I0410 22:54:49.805562   57270 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:54:49.805580   57270 kubeadm.go:309] 
	I0410 22:54:49.805646   57270 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:54:49.805781   57270 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:54:49.805888   57270 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:54:49.805901   57270 kubeadm.go:309] 
	I0410 22:54:49.806038   57270 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:54:49.806143   57270 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:54:49.806154   57270 kubeadm.go:309] 
	I0410 22:54:49.806262   57270 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806398   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:54:49.806438   57270 kubeadm.go:309] 	--control-plane 
	I0410 22:54:49.806456   57270 kubeadm.go:309] 
	I0410 22:54:49.806565   57270 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:54:49.806581   57270 kubeadm.go:309] 
	I0410 22:54:49.806661   57270 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806777   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:54:49.808385   57270 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:49.808455   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:54:49.808473   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:54:49.811276   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:54:49.812840   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:54:49.829865   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
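(Editor's note) The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. A bridge CNI conflist of this kind typically looks roughly like the JSON below; the plugin list and subnet are illustrative placeholders, not the file minikube actually wrote:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}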
	I0410 22:54:49.854383   57270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:54:49.854454   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:49.854456   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-646133 minikube.k8s.io/updated_at=2024_04_10T22_54_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=no-preload-646133 minikube.k8s.io/primary=true
	I0410 22:54:49.888254   57270 ops.go:34] apiserver oom_adj: -16
	I0410 22:54:50.073922   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:50.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.074134   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.574654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.074970   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.074799   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.574902   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.074695   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.574038   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.074975   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.574297   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.074490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.574490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.074280   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.574569   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.074654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.074630   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.574546   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.075044   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.074961   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.574004   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.074121   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.574476   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.705604   57270 kubeadm.go:1107] duration metric: took 12.851213125s to wait for elevateKubeSystemPrivileges
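(Editor's note) The repeated 'kubectl get sa default' runs above are minikube polling, roughly every 500ms, until the default ServiceAccount exists as part of the elevateKubeSystemPrivileges step timed here (12.85s in this run). In shell, the wait is essentially the loop below (a sketch, not minikube's code):

	until sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # retry until kube-controller-manager has created the default ServiceAccount
	done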
	W0410 22:55:02.705636   57270 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:55:02.705644   57270 kubeadm.go:393] duration metric: took 5m16.306442396s to StartCluster
	I0410 22:55:02.705660   57270 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.705739   57270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:55:02.707592   57270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.707844   57270 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:55:02.709479   57270 out.go:177] * Verifying Kubernetes components...
	I0410 22:55:02.707944   57270 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:55:02.708074   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:55:02.710816   57270 addons.go:69] Setting storage-provisioner=true in profile "no-preload-646133"
	I0410 22:55:02.710827   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:55:02.710854   57270 addons.go:234] Setting addon storage-provisioner=true in "no-preload-646133"
	W0410 22:55:02.710865   57270 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:55:02.710889   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.710819   57270 addons.go:69] Setting default-storageclass=true in profile "no-preload-646133"
	I0410 22:55:02.710975   57270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-646133"
	I0410 22:55:02.710821   57270 addons.go:69] Setting metrics-server=true in profile "no-preload-646133"
	I0410 22:55:02.711079   57270 addons.go:234] Setting addon metrics-server=true in "no-preload-646133"
	W0410 22:55:02.711090   57270 addons.go:243] addon metrics-server should already be in state true
	I0410 22:55:02.711119   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.711325   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711349   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711352   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711382   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711486   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711507   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.729696   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0410 22:55:02.730179   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.730725   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.730751   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.731138   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I0410 22:55:02.731161   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0410 22:55:02.731223   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.731532   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731551   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731920   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.731951   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.732083   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732103   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732266   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732290   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732642   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732692   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732892   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.733291   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.733336   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.737245   57270 addons.go:234] Setting addon default-storageclass=true in "no-preload-646133"
	W0410 22:55:02.737274   57270 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:55:02.737304   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.737674   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.737710   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.749656   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0410 22:55:02.750133   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.751030   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.751054   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.751467   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.751642   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.752548   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0410 22:55:02.753119   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.753727   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.753903   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.753918   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.755963   57270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:55:02.754443   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.757499   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0410 22:55:02.757548   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:55:02.757559   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:55:02.757576   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.757684   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.758428   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.758880   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.758893   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.759783   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.760197   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.760224   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.760379   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.762291   57270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:55:02.761210   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.761741   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.763819   57270 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:02.763907   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:55:02.763918   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.763841   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.763963   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.764040   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.764153   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.764239   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.767729   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767758   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.767776   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767730   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.767951   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.768100   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.768223   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.782788   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0410 22:55:02.783161   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.783701   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.783726   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.784081   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.784347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.785932   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.786186   57270 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:02.786200   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:55:02.786217   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.789193   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789526   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.789576   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789837   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.790096   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.790278   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.790431   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.922239   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:55:02.957665   57270 node_ready.go:35] waiting up to 6m0s for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981427   57270 node_ready.go:49] node "no-preload-646133" has status "Ready":"True"
	I0410 22:55:02.981449   57270 node_ready.go:38] duration metric: took 23.75134ms for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981458   57270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:02.986557   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:03.024992   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:03.032744   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:03.156968   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:55:03.156989   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:55:03.237497   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:55:03.237522   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:55:03.274982   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.275005   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:55:03.317464   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.512107   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512130   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512173   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512198   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512435   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512455   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512525   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512530   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512541   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512542   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512538   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512551   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512558   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512497   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512782   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512799   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512876   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512915   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512878   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.525688   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.525707   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.526017   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.526042   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.526057   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.905597   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.905627   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906016   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906081   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906089   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906101   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.906107   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906353   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906355   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906381   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906392   57270 addons.go:470] Verifying addon metrics-server=true in "no-preload-646133"
	I0410 22:55:03.908467   57270 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0410 22:55:03.910238   57270 addons.go:505] duration metric: took 1.20230017s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
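(Editor's note) With storage-provisioner, default-storageclass and metrics-server enabled above, one way to confirm that metrics-server actually comes up (a hypothetical follow-up, not run in this log; its pod is still Pending at this point) is:

	kubectl get apiservice v1beta1.metrics.k8s.io     # metrics-server registers this aggregated API
	kubectl -n kube-system get deploy metrics-server
	kubectl top nodes                                  # succeeds only after the first metrics scrape completes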
	I0410 22:55:05.035855   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"False"
	I0410 22:55:05.493330   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.493354   57270 pod_ready.go:81] duration metric: took 2.506773848s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.493365   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498568   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.498593   57270 pod_ready.go:81] duration metric: took 5.220548ms for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498604   57270 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505133   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.505156   57270 pod_ready.go:81] duration metric: took 6.544104ms for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505165   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510391   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.510415   57270 pod_ready.go:81] duration metric: took 5.2417ms for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510427   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524717   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.524737   57270 pod_ready.go:81] duration metric: took 14.302445ms for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524747   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891005   57270 pod_ready.go:92] pod "kube-proxy-24vhc" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.891029   57270 pod_ready.go:81] duration metric: took 366.275947ms for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891039   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291050   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:06.291075   57270 pod_ready.go:81] duration metric: took 400.028808ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291084   57270 pod_ready.go:38] duration metric: took 3.309617471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:06.291101   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:55:06.291165   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:55:06.308433   57270 api_server.go:72] duration metric: took 3.600549626s to wait for apiserver process to appear ...
	I0410 22:55:06.308461   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:55:06.308479   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:55:06.312630   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:55:06.313434   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:55:06.313457   57270 api_server.go:131] duration metric: took 4.989017ms to wait for apiserver health ...
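(Editor's note) The health probe above hits https://192.168.50.17:8443/healthz and gets 200/ok. The same check can be reproduced from the host with curl, assuming anonymous access to /healthz is allowed (the Kubernetes default via the system:public-info-viewer role):

	curl -k https://192.168.50.17:8443/healthz   # -k: the serving cert is signed by minikube's own CA; expected body: ok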
	I0410 22:55:06.313466   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:55:06.494780   57270 system_pods.go:59] 9 kube-system pods found
	I0410 22:55:06.494813   57270 system_pods.go:61] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.494820   57270 system_pods.go:61] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.494826   57270 system_pods.go:61] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.494833   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.494838   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.494843   57270 system_pods.go:61] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.494848   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.494856   57270 system_pods.go:61] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.494862   57270 system_pods.go:61] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.494871   57270 system_pods.go:74] duration metric: took 181.399385ms to wait for pod list to return data ...
	I0410 22:55:06.494890   57270 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:55:06.690158   57270 default_sa.go:45] found service account: "default"
	I0410 22:55:06.690185   57270 default_sa.go:55] duration metric: took 195.289153ms for default service account to be created ...
	I0410 22:55:06.690194   57270 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:55:06.893604   57270 system_pods.go:86] 9 kube-system pods found
	I0410 22:55:06.893632   57270 system_pods.go:89] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.893638   57270 system_pods.go:89] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.893642   57270 system_pods.go:89] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.893646   57270 system_pods.go:89] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.893651   57270 system_pods.go:89] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.893656   57270 system_pods.go:89] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.893659   57270 system_pods.go:89] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.893665   57270 system_pods.go:89] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.893670   57270 system_pods.go:89] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.893679   57270 system_pods.go:126] duration metric: took 203.480657ms to wait for k8s-apps to be running ...
	I0410 22:55:06.893686   57270 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:55:06.893730   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:55:06.909072   57270 system_svc.go:56] duration metric: took 15.374403ms WaitForService to wait for kubelet
	I0410 22:55:06.909096   57270 kubeadm.go:576] duration metric: took 4.20122533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:55:06.909115   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:55:07.090651   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:55:07.090673   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:55:07.090682   57270 node_conditions.go:105] duration metric: took 181.563241ms to run NodePressure ...
	I0410 22:55:07.090692   57270 start.go:240] waiting for startup goroutines ...
	I0410 22:55:07.090698   57270 start.go:245] waiting for cluster config update ...
	I0410 22:55:07.090707   57270 start.go:254] writing updated cluster config ...
	I0410 22:55:07.090957   57270 ssh_runner.go:195] Run: rm -f paused
	I0410 22:55:07.140644   57270 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.1 (minor skew: 1)
	I0410 22:55:07.142770   57270 out.go:177] * Done! kubectl is now configured to use "no-preload-646133" cluster and "default" namespace by default
	I0410 22:56:40.435994   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:56:40.436123   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0410 22:56:40.437810   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:56:40.437872   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:56:40.437967   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:56:40.438082   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:56:40.438235   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:56:40.438321   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:56:40.440009   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:56:40.440110   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:56:40.440210   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:56:40.440336   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:56:40.440417   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:56:40.440501   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:56:40.440563   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:56:40.440622   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:56:40.440685   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:56:40.440752   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:56:40.440858   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:56:40.440923   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:56:40.441004   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:56:40.441076   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:56:40.441131   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:56:40.441185   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:56:40.441242   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:56:40.441375   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:56:40.441501   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:56:40.441565   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:56:40.441658   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:56:40.443122   57719 out.go:204]   - Booting up control plane ...
	I0410 22:56:40.443230   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:56:40.443332   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:56:40.443431   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:56:40.443549   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:56:40.443710   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:56:40.443783   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:56:40.443883   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444111   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444200   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444429   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444520   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444761   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444869   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445124   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445235   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445416   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445423   57719 kubeadm.go:309] 
	I0410 22:56:40.445465   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:56:40.445512   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:56:40.445520   57719 kubeadm.go:309] 
	I0410 22:56:40.445548   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:56:40.445595   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:56:40.445712   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:56:40.445722   57719 kubeadm.go:309] 
	I0410 22:56:40.445880   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:56:40.445931   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:56:40.445967   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:56:40.445972   57719 kubeadm.go:309] 
	I0410 22:56:40.446095   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:56:40.446190   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:56:40.446201   57719 kubeadm.go:309] 
	I0410 22:56:40.446326   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:56:40.446452   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:56:40.446548   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:56:40.446611   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:56:40.446659   57719 kubeadm.go:309] 
	I0410 22:56:40.446681   57719 kubeadm.go:393] duration metric: took 8m5.163157284s to StartCluster
	I0410 22:56:40.446805   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:56:40.446880   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:56:40.499163   57719 cri.go:89] found id: ""
	I0410 22:56:40.499196   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.499205   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:56:40.499212   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:56:40.499292   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:56:40.545429   57719 cri.go:89] found id: ""
	I0410 22:56:40.545465   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.545473   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:56:40.545479   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:56:40.545538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:56:40.583842   57719 cri.go:89] found id: ""
	I0410 22:56:40.583870   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.583880   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:56:40.583887   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:56:40.583957   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:56:40.621054   57719 cri.go:89] found id: ""
	I0410 22:56:40.621075   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.621083   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:56:40.621091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:56:40.621149   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:56:40.665133   57719 cri.go:89] found id: ""
	I0410 22:56:40.665161   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.665168   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:56:40.665175   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:56:40.665231   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:56:40.707490   57719 cri.go:89] found id: ""
	I0410 22:56:40.707519   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.707529   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:56:40.707536   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:56:40.707598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:56:40.748539   57719 cri.go:89] found id: ""
	I0410 22:56:40.748565   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.748576   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:56:40.748584   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:56:40.748644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:56:40.792326   57719 cri.go:89] found id: ""
	I0410 22:56:40.792349   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.792358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:56:40.792366   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:56:40.792376   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:56:40.844309   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:56:40.844346   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:56:40.859678   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:56:40.859715   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:56:40.950099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:56:40.950123   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:56:40.950141   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:56:41.073547   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:56:41.073589   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0410 22:56:41.124970   57719 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0410 22:56:41.125024   57719 out.go:239] * 
	W0410 22:56:41.125096   57719 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.125129   57719 out.go:239] * 
	W0410 22:56:41.126153   57719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:56:41.129869   57719 out.go:177] 
	W0410 22:56:41.131207   57719 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.131286   57719 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0410 22:56:41.131326   57719 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0410 22:56:41.133049   57719 out.go:177] 
	
	
	==> CRI-O <==
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.913807462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712789802913775852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c20fc7a2-a38d-4560-8d45-95000b7e61b9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.914568223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1609470f-5e64-41d8-bc39-2903575bed5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.914634383Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1609470f-5e64-41d8-bc39-2903575bed5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.914680924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1609470f-5e64-41d8-bc39-2903575bed5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.950207947Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9f2df7e-60f9-4b79-a5b5-db295e858f6a name=/runtime.v1.RuntimeService/Version
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.950284152Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9f2df7e-60f9-4b79-a5b5-db295e858f6a name=/runtime.v1.RuntimeService/Version
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.951577723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9577394-c1e3-4b3c-921d-77f7c88738b2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.952009384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712789802951985243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9577394-c1e3-4b3c-921d-77f7c88738b2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.952633164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dcfab0d-dcbe-49eb-b2cf-e69196ecf9ad name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.952680946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dcfab0d-dcbe-49eb-b2cf-e69196ecf9ad name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.952717249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9dcfab0d-dcbe-49eb-b2cf-e69196ecf9ad name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.988341175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7cfa8a2-6ca8-4ace-bbaa-b3366257d3c3 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.988418743Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7cfa8a2-6ca8-4ace-bbaa-b3366257d3c3 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.989617129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=431b8b42-a0e6-4eb8-bb75-e3bfa63532de name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.989963056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712789802989942119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=431b8b42-a0e6-4eb8-bb75-e3bfa63532de name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.990535188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=213dea8c-3dcb-4b9c-82df-d5765b5f3932 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.990583346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=213dea8c-3dcb-4b9c-82df-d5765b5f3932 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:42 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:42.990612559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=213dea8c-3dcb-4b9c-82df-d5765b5f3932 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:43 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:43.026379031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=588aeda3-4b30-4007-bb91-6888006ff579 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:56:43 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:43.026479554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=588aeda3-4b30-4007-bb91-6888006ff579 name=/runtime.v1.RuntimeService/Version
	Apr 10 22:56:43 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:43.027752201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09b901ac-9e81-4069-9fe8-781431d3f745 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:56:43 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:43.028294752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712789803028261938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09b901ac-9e81-4069-9fe8-781431d3f745 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 22:56:43 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:43.029099696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a8bde70-e524-479e-9395-a97129c22cc0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:43 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:43.029214900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a8bde70-e524-479e-9395-a97129c22cc0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 22:56:43 old-k8s-version-862528 crio[650]: time="2024-04-10 22:56:43.029275918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0a8bde70-e524-479e-9395-a97129c22cc0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr10 22:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052439] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041651] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.553485] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.712541] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.654645] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.367023] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.061213] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068973] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.198082] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.121287] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.251878] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.515656] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.064093] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.589961] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[ +11.062720] kauditd_printk_skb: 46 callbacks suppressed
	[Apr10 22:52] systemd-fstab-generator[4966]: Ignoring "noauto" option for root device
	[Apr10 22:54] systemd-fstab-generator[5254]: Ignoring "noauto" option for root device
	[  +0.070219] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:56:43 up 8 min,  0 users,  load average: 0.05, 0.13, 0.08
	Linux old-k8s-version-862528 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]: goroutine 145 [runnable]:
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000855180)
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]: goroutine 146 [select]:
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000b305a0, 0xc000b0b501, 0xc000965080, 0xc0009a51a0, 0xc000b32400, 0xc000b323c0)
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000b0b5c0, 0x0, 0x0)
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000855180)
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 10 22:56:40 old-k8s-version-862528 kubelet[5429]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Apr 10 22:56:40 old-k8s-version-862528 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 10 22:56:40 old-k8s-version-862528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 10 22:56:40 old-k8s-version-862528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 10 22:56:40 old-k8s-version-862528 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 10 22:56:40 old-k8s-version-862528 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 10 22:56:41 old-k8s-version-862528 kubelet[5487]: I0410 22:56:41.069277    5487 server.go:416] Version: v1.20.0
	Apr 10 22:56:41 old-k8s-version-862528 kubelet[5487]: I0410 22:56:41.069673    5487 server.go:837] Client rotation is on, will bootstrap in background
	Apr 10 22:56:41 old-k8s-version-862528 kubelet[5487]: I0410 22:56:41.072075    5487 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 10 22:56:41 old-k8s-version-862528 kubelet[5487]: W0410 22:56:41.073060    5487 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 10 22:56:41 old-k8s-version-862528 kubelet[5487]: I0410 22:56:41.073328    5487 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862528 -n old-k8s-version-862528
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 2 (256.111856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-862528" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (764.25s)
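The advice captured in the log above reduces to three steps: inspect the crash-looping kubelet, confirm that CRI-O never started any control-plane containers, and retry the start with the systemd cgroup driver. A minimal triage sketch, assuming the same old-k8s-version-862528 profile is still present and reachable over minikube ssh (commands below are illustrative and were not part of this run; all flags are taken from the suggestions quoted in the log):

	# Inspect the kubelet unit on the node; the journal above shows the restart counter at 20
	minikube ssh -p old-k8s-version-862528 -- sudo journalctl -xeu kubelet | tail -n 50
	# Confirm whether CRI-O ever started any control-plane containers (the list is empty in this run)
	minikube ssh -p old-k8s-version-862528 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# Retry the start with the cgroup driver suggested by the K8S_KUBELET_NOT_RUNNING advice
	minikube start -p old-k8s-version-862528 --extra-config=kubelet.cgroup-driver=systemd
	# Collect full logs for a GitHub issue, as the boxed advice recommends
	minikube logs -p old-k8s-version-862528 --file=logs.txt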

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-519831 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-519831 --alsologtostderr -v=3: exit status 82 (2m0.538737308s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-519831"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 22:44:15.915444   57903 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:44:15.915678   57903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:44:15.915686   57903 out.go:304] Setting ErrFile to fd 2...
	I0410 22:44:15.915691   57903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:44:15.915853   57903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:44:15.916080   57903 out.go:298] Setting JSON to false
	I0410 22:44:15.916153   57903 mustload.go:65] Loading cluster: default-k8s-diff-port-519831
	I0410 22:44:15.916469   57903 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:44:15.916533   57903 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:44:15.916705   57903 mustload.go:65] Loading cluster: default-k8s-diff-port-519831
	I0410 22:44:15.916805   57903 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:44:15.916834   57903 stop.go:39] StopHost: default-k8s-diff-port-519831
	I0410 22:44:15.917251   57903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:44:15.917289   57903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:44:15.931675   57903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44849
	I0410 22:44:15.932210   57903 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:44:15.932833   57903 main.go:141] libmachine: Using API Version  1
	I0410 22:44:15.932855   57903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:44:15.933220   57903 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:44:15.935818   57903 out.go:177] * Stopping node "default-k8s-diff-port-519831"  ...
	I0410 22:44:15.937326   57903 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0410 22:44:15.937363   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:44:15.937638   57903 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0410 22:44:15.937665   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:44:15.940394   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:44:15.940861   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:43:23 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:44:15.940892   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:44:15.941056   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:44:15.941260   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:44:15.941428   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:44:15.941573   57903 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:44:16.047774   57903 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0410 22:44:16.106101   57903 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0410 22:44:16.187108   57903 main.go:141] libmachine: Stopping "default-k8s-diff-port-519831"...
	I0410 22:44:16.187151   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:44:16.188982   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Stop
	I0410 22:44:16.192739   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 0/120
	I0410 22:44:17.194979   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 1/120
	I0410 22:44:18.196209   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 2/120
	I0410 22:44:19.197933   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 3/120
	I0410 22:44:20.199538   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 4/120
	I0410 22:44:21.201739   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 5/120
	I0410 22:44:22.203334   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 6/120
	I0410 22:44:23.204739   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 7/120
	I0410 22:44:24.206308   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 8/120
	I0410 22:44:25.208136   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 9/120
	I0410 22:44:26.209589   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 10/120
	I0410 22:44:27.211186   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 11/120
	I0410 22:44:28.212694   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 12/120
	I0410 22:44:29.214223   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 13/120
	I0410 22:44:30.215894   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 14/120
	I0410 22:44:31.218228   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 15/120
	I0410 22:44:32.220051   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 16/120
	I0410 22:44:33.221909   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 17/120
	I0410 22:44:34.223299   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 18/120
	I0410 22:44:35.224819   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 19/120
	I0410 22:44:36.227361   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 20/120
	I0410 22:44:37.229139   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 21/120
	I0410 22:44:38.230823   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 22/120
	I0410 22:44:39.232275   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 23/120
	I0410 22:44:40.233972   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 24/120
	I0410 22:44:41.236117   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 25/120
	I0410 22:44:42.237557   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 26/120
	I0410 22:44:43.238914   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 27/120
	I0410 22:44:44.240443   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 28/120
	I0410 22:44:45.241897   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 29/120
	I0410 22:44:46.244261   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 30/120
	I0410 22:44:47.245610   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 31/120
	I0410 22:44:48.246967   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 32/120
	I0410 22:44:49.248342   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 33/120
	I0410 22:44:50.249793   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 34/120
	I0410 22:44:51.251957   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 35/120
	I0410 22:44:52.253495   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 36/120
	I0410 22:44:53.255011   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 37/120
	I0410 22:44:54.256750   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 38/120
	I0410 22:44:55.258198   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 39/120
	I0410 22:44:56.259793   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 40/120
	I0410 22:44:57.261237   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 41/120
	I0410 22:44:58.262750   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 42/120
	I0410 22:44:59.264287   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 43/120
	I0410 22:45:00.265754   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 44/120
	I0410 22:45:01.268151   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 45/120
	I0410 22:45:02.269832   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 46/120
	I0410 22:45:03.271379   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 47/120
	I0410 22:45:04.272842   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 48/120
	I0410 22:45:05.274292   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 49/120
	I0410 22:45:06.276635   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 50/120
	I0410 22:45:07.278096   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 51/120
	I0410 22:45:08.279690   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 52/120
	I0410 22:45:09.281157   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 53/120
	I0410 22:45:10.282766   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 54/120
	I0410 22:45:11.285148   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 55/120
	I0410 22:45:12.286805   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 56/120
	I0410 22:45:13.288481   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 57/120
	I0410 22:45:14.289902   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 58/120
	I0410 22:45:15.291227   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 59/120
	I0410 22:45:16.293507   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 60/120
	I0410 22:45:17.294838   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 61/120
	I0410 22:45:18.296781   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 62/120
	I0410 22:45:19.298492   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 63/120
	I0410 22:45:20.299904   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 64/120
	I0410 22:45:21.301936   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 65/120
	I0410 22:45:22.303295   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 66/120
	I0410 22:45:23.304776   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 67/120
	I0410 22:45:24.306147   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 68/120
	I0410 22:45:25.307765   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 69/120
	I0410 22:45:26.310059   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 70/120
	I0410 22:45:27.311499   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 71/120
	I0410 22:45:28.313306   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 72/120
	I0410 22:45:29.314741   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 73/120
	I0410 22:45:30.316324   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 74/120
	I0410 22:45:31.318702   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 75/120
	I0410 22:45:32.320210   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 76/120
	I0410 22:45:33.321949   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 77/120
	I0410 22:45:34.323185   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 78/120
	I0410 22:45:35.324919   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 79/120
	I0410 22:45:36.326886   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 80/120
	I0410 22:45:37.328231   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 81/120
	I0410 22:45:38.329969   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 82/120
	I0410 22:45:39.331503   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 83/120
	I0410 22:45:40.333165   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 84/120
	I0410 22:45:41.335079   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 85/120
	I0410 22:45:42.336386   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 86/120
	I0410 22:45:43.337756   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 87/120
	I0410 22:45:44.339215   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 88/120
	I0410 22:45:45.340540   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 89/120
	I0410 22:45:46.342670   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 90/120
	I0410 22:45:47.344855   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 91/120
	I0410 22:45:48.346817   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 92/120
	I0410 22:45:49.348505   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 93/120
	I0410 22:45:50.350172   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 94/120
	I0410 22:45:51.352336   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 95/120
	I0410 22:45:52.353994   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 96/120
	I0410 22:45:53.355533   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 97/120
	I0410 22:45:54.356940   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 98/120
	I0410 22:45:55.358544   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 99/120
	I0410 22:45:56.361272   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 100/120
	I0410 22:45:57.362935   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 101/120
	I0410 22:45:58.364228   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 102/120
	I0410 22:45:59.365929   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 103/120
	I0410 22:46:00.367463   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 104/120
	I0410 22:46:01.369719   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 105/120
	I0410 22:46:02.371153   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 106/120
	I0410 22:46:03.372582   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 107/120
	I0410 22:46:04.374045   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 108/120
	I0410 22:46:05.375449   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 109/120
	I0410 22:46:06.377823   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 110/120
	I0410 22:46:07.379354   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 111/120
	I0410 22:46:08.380696   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 112/120
	I0410 22:46:09.382296   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 113/120
	I0410 22:46:10.383691   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 114/120
	I0410 22:46:11.386181   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 115/120
	I0410 22:46:12.387682   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 116/120
	I0410 22:46:13.389179   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 117/120
	I0410 22:46:14.390730   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 118/120
	I0410 22:46:15.392302   57903 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for machine to stop 119/120
	I0410 22:46:16.393461   57903 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0410 22:46:16.393525   57903 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0410 22:46:16.395328   57903 out.go:177] 
	W0410 22:46:16.396578   57903 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0410 22:46:16.396591   57903 out.go:239] * 
	* 
	W0410 22:46:16.399031   57903 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:46:16.400320   57903 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-519831 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831: exit status 3 (18.555178326s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:46:34.956743   58497 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.170:22: connect: no route to host
	E0410 22:46:34.956779   58497 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.170:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-519831" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)
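
The stop failure above follows a fixed pattern: the driver polls the VM state once per second for 120 attempts ("Waiting for machine to stop N/120"), then gives up with GUEST_STOP_TIMEOUT because the domain still reports "Running". The Go sketch below mirrors that observed wait loop purely as a reading aid for the log; it is a hypothetical illustration, not minikube's actual stop code, and the waitForStop helper and its parameters are invented names.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls the machine state once per second, like the
// "Waiting for machine to stop N/120" lines above, and returns an
// error if the state never leaves "Running".
func waitForStop(getState func() string, attempts int) error {
	for i := 0; i < attempts; i++ {
		if getState() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A stuck VM that always reports "Running"; 3 attempts keep the demo
	// short (the run above used 120, i.e. a two-minute budget).
	err := waitForStop(func() string { return "Running" }, 3)
	fmt.Println("stop err:", err)
}
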

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-706500 -n embed-certs-706500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-706500 -n embed-certs-706500: exit status 3 (3.168443694s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:44:47.788707   58050 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host
	E0410 22:44:47.788727   58050 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-706500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-706500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152916736s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-706500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-706500 -n embed-certs-706500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-706500 -n embed-certs-706500: exit status 3 (3.062449901s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:44:57.004806   58140 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host
	E0410 22:44:57.004827   58140 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-706500" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831: exit status 3 (3.168008607s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:46:38.124766   58591 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.170:22: connect: no route to host
	E0410 22:46:38.124795   58591 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.170:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-519831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-519831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15331291s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.170:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-519831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831: exit status 3 (3.062334515s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0410 22:46:47.340872   58671 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.170:22: connect: no route to host
	E0410 22:46:47.340898   58671 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.170:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-519831" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
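
Both EnableAddonAfterStop failures above share one root cause: after the timed-out stop, the VM's SSH port is unreachable, so every status and addon command fails with "dial tcp <ip>:22: connect: no route to host". A minimal sketch of that connectivity probe follows; it only demonstrates the error class using the standard library and is not how minikube itself performs its host status check.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the log above; unreachable once the guest network is gone.
	addr := "192.168.72.170:22"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// Typically prints something like:
		// dial tcp 192.168.72.170:22: connect: no route to host
		fmt.Println("status error:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable")
}
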

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-10 23:02:49.736900975 +0000 UTC m=+5691.973330265
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-519831 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-519831 logs -n 25: (2.139595424s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-646133             | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:41 UTC |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:42 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-706500            | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC | 10 Apr 24 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862528        | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-646133                  | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-464519                              | cert-expiration-464519       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-676292 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	|         | disable-driver-mounts-676292                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862528             | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-519831  | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-706500                 | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:54 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-519831       | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC | 10 Apr 24 22:53 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:46:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:46:47.395706   58701 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:46:47.395991   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396002   58701 out.go:304] Setting ErrFile to fd 2...
	I0410 22:46:47.396019   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396208   58701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:46:47.396802   58701 out.go:298] Setting JSON to false
	I0410 22:46:47.397726   58701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5350,"bootTime":1712783858,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:46:47.397786   58701 start.go:139] virtualization: kvm guest
	I0410 22:46:47.400191   58701 out.go:177] * [default-k8s-diff-port-519831] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:46:47.401578   58701 notify.go:220] Checking for updates...
	I0410 22:46:47.402880   58701 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:46:47.404311   58701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:46:47.405790   58701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:46:47.407012   58701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:46:47.408130   58701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:46:47.409497   58701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:46:47.411183   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:46:47.411591   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.411632   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.426322   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0410 22:46:47.426759   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.427345   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.427366   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.427716   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.427926   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.428221   58701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:46:47.428646   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.428696   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.444105   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0410 22:46:47.444537   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.445035   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.445058   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.445398   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.445592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.480451   58701 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:46:47.481837   58701 start.go:297] selected driver: kvm2
	I0410 22:46:47.481852   58701 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.481985   58701 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:46:47.482657   58701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.482750   58701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:46:47.498330   58701 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:46:47.498668   58701 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:46:47.498735   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:46:47.498748   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:46:47.498784   58701 start.go:340] cluster config:
	{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.498877   58701 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.500723   58701 out.go:177] * Starting "default-k8s-diff-port-519831" primary control-plane node in "default-k8s-diff-port-519831" cluster
	I0410 22:46:47.180678   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:47.501967   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:46:47.502009   58701 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 22:46:47.502030   58701 cache.go:56] Caching tarball of preloaded images
	I0410 22:46:47.502108   58701 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:46:47.502118   58701 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 22:46:47.502202   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:46:47.502366   58701 start.go:360] acquireMachinesLock for default-k8s-diff-port-519831: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
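Editor's note: the preload.go entries above show the cache shortcut minikube takes here, checking for a preloaded-images tarball on disk before falling back to a download. A minimal, illustrative sketch of that existence check in Go, with the cache directory and file-name layout hard-coded to mirror the log rather than calling minikube's real API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the expected location of a preloaded-images tarball for a
// given Kubernetes version and runtime. The naming scheme copies the log output
// and is illustrative only.
func preloadPath(cacheDir, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.29.3", "cri-o")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}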
	I0410 22:46:50.252732   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:56.332647   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:59.404660   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:05.484717   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:08.556632   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:14.636753   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:17.708788   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:23.788661   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:26.860683   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:32.940630   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:36.012689   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:42.092749   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:45.164706   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:51.244682   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:54.316652   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:00.396637   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:03.468672   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:06.472768   57719 start.go:364] duration metric: took 4m5.937893783s to acquireMachinesLock for "old-k8s-version-862528"
	I0410 22:48:06.472833   57719 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:06.472852   57719 fix.go:54] fixHost starting: 
	I0410 22:48:06.473157   57719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:06.473186   57719 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:06.488728   57719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0410 22:48:06.489157   57719 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:06.489590   57719 main.go:141] libmachine: Using API Version  1
	I0410 22:48:06.489612   57719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:06.490011   57719 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:06.490171   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:06.490337   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetState
	I0410 22:48:06.491997   57719 fix.go:112] recreateIfNeeded on old-k8s-version-862528: state=Stopped err=<nil>
	I0410 22:48:06.492030   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	W0410 22:48:06.492234   57719 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:06.493891   57719 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862528" ...
	I0410 22:48:06.469869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:06.469904   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470235   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:48:06.470261   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470529   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:48:06.472589   57270 machine.go:97] duration metric: took 4m35.561692081s to provisionDockerMachine
	I0410 22:48:06.472636   57270 fix.go:56] duration metric: took 4m35.586484815s for fixHost
	I0410 22:48:06.472646   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 4m35.586540892s
	W0410 22:48:06.472671   57270 start.go:713] error starting host: provision: host is not running
	W0410 22:48:06.472773   57270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0410 22:48:06.472785   57270 start.go:728] Will try again in 5 seconds ...
	I0410 22:48:06.495233   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .Start
	I0410 22:48:06.495416   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring networks are active...
	I0410 22:48:06.496254   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network default is active
	I0410 22:48:06.496589   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network mk-old-k8s-version-862528 is active
	I0410 22:48:06.497002   57719 main.go:141] libmachine: (old-k8s-version-862528) Getting domain xml...
	I0410 22:48:06.497751   57719 main.go:141] libmachine: (old-k8s-version-862528) Creating domain...
	I0410 22:48:07.722703   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting to get IP...
	I0410 22:48:07.723942   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:07.724373   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:07.724451   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:07.724338   59021 retry.go:31] will retry after 284.455366ms: waiting for machine to come up
	I0410 22:48:08.011077   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.011598   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.011628   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.011545   59021 retry.go:31] will retry after 337.946102ms: waiting for machine to come up
	I0410 22:48:08.351219   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.351725   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.351744   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.351691   59021 retry.go:31] will retry after 454.774669ms: waiting for machine to come up
	I0410 22:48:08.808516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.808953   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.808991   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.808893   59021 retry.go:31] will retry after 484.667282ms: waiting for machine to come up
	I0410 22:48:09.295665   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.296127   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.296148   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.296083   59021 retry.go:31] will retry after 515.00238ms: waiting for machine to come up
	I0410 22:48:09.812855   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.813337   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.813362   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.813289   59021 retry.go:31] will retry after 596.67118ms: waiting for machine to come up
	I0410 22:48:10.411103   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:10.411616   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:10.411640   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:10.411568   59021 retry.go:31] will retry after 1.035822512s: waiting for machine to come up
	I0410 22:48:11.473748   57270 start.go:360] acquireMachinesLock for no-preload-646133: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:48:11.448894   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:11.449358   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:11.449388   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:11.449315   59021 retry.go:31] will retry after 1.258446774s: waiting for machine to come up
	I0410 22:48:12.709048   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:12.709587   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:12.709618   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:12.709530   59021 retry.go:31] will retry after 1.149380432s: waiting for machine to come up
	I0410 22:48:13.860550   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:13.861084   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:13.861110   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:13.861028   59021 retry.go:31] will retry after 1.733388735s: waiting for machine to come up
	I0410 22:48:15.595870   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:15.596447   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:15.596487   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:15.596343   59021 retry.go:31] will retry after 2.536794123s: waiting for machine to come up
	I0410 22:48:18.135592   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:18.136099   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:18.136128   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:18.136056   59021 retry.go:31] will retry after 3.390395523s: waiting for machine to come up
	I0410 22:48:21.528518   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:21.528964   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:21.529008   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:21.528906   59021 retry.go:31] will retry after 4.165145769s: waiting for machine to come up
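Editor's note: the repeated retry.go lines above are a poll loop. After restarting the VM, libmachine asks the kvm2 driver for the domain's IP and, while no DHCP lease is visible yet, sleeps for a growing, jittered interval before asking again. A small sketch of that pattern, assuming a hypothetical lookupIP function standing in for the libvirt DHCP-lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a stand-in for resolving the domain's MAC address to an IP via
// the network's DHCP leases; here it simply "succeeds" after a few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 8 {
		return "", errNoLease
	}
	return "192.168.61.178", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add jitter and grow the delay, loosely mirroring the intervals in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}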
	I0410 22:48:26.977460   58186 start.go:364] duration metric: took 3m29.815175662s to acquireMachinesLock for "embed-certs-706500"
	I0410 22:48:26.977524   58186 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:26.977532   58186 fix.go:54] fixHost starting: 
	I0410 22:48:26.977935   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:26.977965   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:26.994175   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0410 22:48:26.994552   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:26.995016   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:48:26.995040   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:26.995447   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:26.995652   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:26.995826   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:48:26.997547   58186 fix.go:112] recreateIfNeeded on embed-certs-706500: state=Stopped err=<nil>
	I0410 22:48:26.997580   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	W0410 22:48:26.997902   58186 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:27.000500   58186 out.go:177] * Restarting existing kvm2 VM for "embed-certs-706500" ...
	I0410 22:48:27.002204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Start
	I0410 22:48:27.002398   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring networks are active...
	I0410 22:48:27.003133   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network default is active
	I0410 22:48:27.003465   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network mk-embed-certs-706500 is active
	I0410 22:48:27.003863   58186 main.go:141] libmachine: (embed-certs-706500) Getting domain xml...
	I0410 22:48:27.004603   58186 main.go:141] libmachine: (embed-certs-706500) Creating domain...
	I0410 22:48:25.699595   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700129   57719 main.go:141] libmachine: (old-k8s-version-862528) Found IP for machine: 192.168.61.178
	I0410 22:48:25.700159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has current primary IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700166   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserving static IP address...
	I0410 22:48:25.700654   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserved static IP address: 192.168.61.178
	I0410 22:48:25.700676   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting for SSH to be available...
	I0410 22:48:25.700704   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.700732   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | skip adding static IP to network mk-old-k8s-version-862528 - found existing host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"}
	I0410 22:48:25.700745   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Getting to WaitForSSH function...
	I0410 22:48:25.702929   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703290   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.703322   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703490   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH client type: external
	I0410 22:48:25.703519   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa (-rw-------)
	I0410 22:48:25.703551   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:25.703590   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | About to run SSH command:
	I0410 22:48:25.703635   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | exit 0
	I0410 22:48:25.832738   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | SSH cmd err, output: <nil>: 
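Editor's note: for the WaitForSSH step above, the driver shells out to the system ssh binary with host-key checking disabled and the machine's generated identity file, then runs "exit 0" to confirm the daemon is reachable. A rough equivalent using os/exec, with a fixed address and an illustrative key path rather than minikube's real plumbing:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/.minikube/machines/old-k8s-version-862528/id_rsa", // illustrative path
		"docker@192.168.61.178",
		"exit 0",
	}
	if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
		fmt.Printf("ssh not ready yet: %v (%s)\n", err, out)
		return
	}
	fmt.Println("SSH is available")
}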
	I0410 22:48:25.833133   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetConfigRaw
	I0410 22:48:25.833784   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:25.836323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.836874   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.836908   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.837156   57719 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json ...
	I0410 22:48:25.837472   57719 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:25.837502   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:25.837710   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.840159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840488   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.840516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840593   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.840815   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.840992   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.841134   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.841337   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.841543   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.841556   57719 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:25.957153   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:25.957189   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957438   57719 buildroot.go:166] provisioning hostname "old-k8s-version-862528"
	I0410 22:48:25.957461   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.960779   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961149   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.961184   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961332   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.961546   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961689   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961864   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.962020   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.962196   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.962207   57719 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862528 && echo "old-k8s-version-862528" | sudo tee /etc/hostname
	I0410 22:48:26.087073   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862528
	
	I0410 22:48:26.087099   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.089770   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090109   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.090140   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090261   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.090446   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090623   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090760   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.090951   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.091131   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.091155   57719 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:26.214422   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
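Editor's note: the shell snippet logged above is how the provisioner pins the hostname in /etc/hosts: it rewrites an existing 127.0.1.1 entry when one is present and appends a new one otherwise. A small sketch that assembles the same snippet for a given hostname; in minikube the resulting string is handed to an SSH command runner, which is not reproduced here:

package main

import "fmt"

// hostsCommand returns the shell that ensures /etc/hosts maps 127.0.1.1 to the
// given hostname, mirroring the logged command.
func hostsCommand(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	// Printing the command is enough to show the substitution.
	fmt.Println(hostsCommand("old-k8s-version-862528"))
}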
	I0410 22:48:26.214462   57719 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:26.214490   57719 buildroot.go:174] setting up certificates
	I0410 22:48:26.214498   57719 provision.go:84] configureAuth start
	I0410 22:48:26.214509   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:26.214793   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.217463   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217809   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.217850   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217975   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.219971   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220235   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.220265   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220480   57719 provision.go:143] copyHostCerts
	I0410 22:48:26.220526   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:26.220542   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:26.220604   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:26.220703   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:26.220712   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:26.220736   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:26.220789   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:26.220796   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:26.220817   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:26.220864   57719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862528 san=[127.0.0.1 192.168.61.178 localhost minikube old-k8s-version-862528]
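Editor's note: the provision.go line above generates a server certificate whose SAN list covers the loopback address, the machine IP, and the machine's names, signed by the minikube CA. The sketch below shows the same kind of SAN-bearing certificate using only the standard library, but self-signed for brevity instead of CA-signed, so it is an approximation rather than the actual provisioning code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-862528"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-862528"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.178")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}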
	I0410 22:48:26.288372   57719 provision.go:177] copyRemoteCerts
	I0410 22:48:26.288445   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:26.288468   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.290980   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291298   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.291339   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291444   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.291635   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.291809   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.291927   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.379823   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:26.405285   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0410 22:48:26.430122   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:26.456124   57719 provision.go:87] duration metric: took 241.614364ms to configureAuth
	I0410 22:48:26.456154   57719 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:26.456356   57719 config.go:182] Loaded profile config "old-k8s-version-862528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:48:26.456480   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.459028   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459335   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.459366   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.459742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.459888   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.460037   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.460211   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.460379   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.460413   57719 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:26.732588   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:26.732614   57719 machine.go:97] duration metric: took 895.122467ms to provisionDockerMachine
	I0410 22:48:26.732627   57719 start.go:293] postStartSetup for "old-k8s-version-862528" (driver="kvm2")
	I0410 22:48:26.732641   57719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:26.732679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.733014   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:26.733044   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.735820   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736217   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.736244   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736418   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.736630   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.736840   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.737020   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.823452   57719 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:26.827806   57719 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:26.827827   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:26.827899   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:26.828009   57719 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:26.828122   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:26.837564   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:26.862278   57719 start.go:296] duration metric: took 129.638185ms for postStartSetup
	I0410 22:48:26.862325   57719 fix.go:56] duration metric: took 20.389482643s for fixHost
	I0410 22:48:26.862346   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.864911   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865277   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.865301   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865419   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.865597   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865872   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.866083   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.866283   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.866300   57719 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:26.977317   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789306.948982315
	
	I0410 22:48:26.977337   57719 fix.go:216] guest clock: 1712789306.948982315
	I0410 22:48:26.977344   57719 fix.go:229] Guest: 2024-04-10 22:48:26.948982315 +0000 UTC Remote: 2024-04-10 22:48:26.862329953 +0000 UTC m=+266.486936912 (delta=86.652362ms)
	I0410 22:48:26.977362   57719 fix.go:200] guest clock delta is within tolerance: 86.652362ms
	I0410 22:48:26.977366   57719 start.go:83] releasing machines lock for "old-k8s-version-862528", held for 20.504554043s
	I0410 22:48:26.977386   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.977653   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.980035   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980376   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.980419   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980602   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981224   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981421   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981516   57719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:26.981558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.981645   57719 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:26.981670   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.984375   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984568   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984840   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.984868   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984953   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985030   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.985079   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.985118   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985236   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985277   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985374   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985450   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.985516   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985635   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:27.105002   57719 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:27.111205   57719 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:27.261678   57719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:27.268336   57719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:27.268423   57719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:27.290099   57719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:27.290122   57719 start.go:494] detecting cgroup driver to use...
	I0410 22:48:27.290174   57719 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:27.308787   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:27.325557   57719 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:27.325611   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:27.340859   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:27.355570   57719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:27.479670   57719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:27.653364   57719 docker.go:233] disabling docker service ...
	I0410 22:48:27.653424   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:27.669775   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:27.683654   57719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:27.813212   57719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:27.929620   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:27.946085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:27.966341   57719 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0410 22:48:27.966404   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.978022   57719 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:27.978111   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.989324   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.001429   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.012965   57719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:28.024663   57719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:28.034362   57719 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:28.034423   57719 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:28.048740   57719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:48:28.060698   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:28.188526   57719 ssh_runner.go:195] Run: sudo systemctl restart crio
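Editor's note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup), then reloads systemd and restarts crio. A condensed sketch that just generates those shell invocations, with the option values hard-coded for illustration:

package main

import "fmt"

// crioConfigCommands lists the in-place edits applied to the CRI-O drop-in
// config before the service restart, as seen in the log. Values are illustrative.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(cmd)
	}
}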
	I0410 22:48:28.348442   57719 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:28.348523   57719 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:28.353501   57719 start.go:562] Will wait 60s for crictl version
	I0410 22:48:28.353566   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:28.357486   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:28.391138   57719 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:28.391221   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.421399   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.455851   57719 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0410 22:48:28.457534   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:28.460913   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461297   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:28.461323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461558   57719 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:28.466450   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:28.480549   57719 kubeadm.go:877] updating cluster {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:28.480671   57719 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:48:28.480775   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:28.536971   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:28.537034   57719 ssh_runner.go:195] Run: which lz4
	I0410 22:48:28.541757   57719 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:48:28.546381   57719 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:28.546413   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0410 22:48:30.411805   57719 crio.go:462] duration metric: took 1.870076139s to copy over tarball
	I0410 22:48:30.411900   57719 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:28.229217   58186 main.go:141] libmachine: (embed-certs-706500) Waiting to get IP...
	I0410 22:48:28.230257   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.230673   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.230724   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.230643   59155 retry.go:31] will retry after 262.296498ms: waiting for machine to come up
	I0410 22:48:28.494117   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.494631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.494660   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.494584   59155 retry.go:31] will retry after 237.287095ms: waiting for machine to come up
	I0410 22:48:28.733250   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.733795   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.733817   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.733755   59155 retry.go:31] will retry after 387.436239ms: waiting for machine to come up
	I0410 22:48:29.123585   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.124128   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.124163   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.124073   59155 retry.go:31] will retry after 428.418916ms: waiting for machine to come up
	I0410 22:48:29.554781   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.555244   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.555285   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.555235   59155 retry.go:31] will retry after 683.194159ms: waiting for machine to come up
	I0410 22:48:30.239955   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:30.240385   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:30.240463   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:30.240365   59155 retry.go:31] will retry after 764.240086ms: waiting for machine to come up
	I0410 22:48:31.006294   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:31.006789   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:31.006816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:31.006750   59155 retry.go:31] will retry after 1.113674235s: waiting for machine to come up
	I0410 22:48:33.358026   57719 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946092727s)
	I0410 22:48:33.358059   57719 crio.go:469] duration metric: took 2.946222933s to extract the tarball
	I0410 22:48:33.358069   57719 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:33.402924   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:33.441006   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:33.441033   57719 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:48:33.441090   57719 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.441142   57719 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.441203   57719 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.441210   57719 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.441318   57719 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0410 22:48:33.441339   57719 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.441375   57719 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.441395   57719 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442645   57719 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.442667   57719 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.442706   57719 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.442717   57719 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0410 22:48:33.442796   57719 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.442807   57719 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442814   57719 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.442866   57719 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.651119   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.652634   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.665548   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.669396   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.672510   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.674137   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0410 22:48:33.686915   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.756592   57719 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0410 22:48:33.756639   57719 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.756696   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.756696   57719 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0410 22:48:33.756789   57719 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.756810   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867043   57719 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0410 22:48:33.867061   57719 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0410 22:48:33.867090   57719 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.867091   57719 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.867135   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867166   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867185   57719 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0410 22:48:33.867220   57719 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.867252   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867261   57719 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0410 22:48:33.867303   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.867311   57719 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0410 22:48:33.867355   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867359   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.867286   57719 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0410 22:48:33.867452   57719 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.867481   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.871719   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.881086   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.964827   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.964854   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0410 22:48:33.964932   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0410 22:48:33.964948   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.976084   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0410 22:48:33.976155   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0410 22:48:33.976205   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0410 22:48:34.011460   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0410 22:48:34.289751   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:34.429542   57719 cache_images.go:92] duration metric: took 988.487885ms to LoadCachedImages
	W0410 22:48:34.429636   57719 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0410 22:48:34.429665   57719 kubeadm.go:928] updating node { 192.168.61.178 8443 v1.20.0 crio true true} ...
	I0410 22:48:34.429782   57719 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:34.429870   57719 ssh_runner.go:195] Run: crio config
	I0410 22:48:34.478794   57719 cni.go:84] Creating CNI manager for ""
	I0410 22:48:34.478829   57719 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:34.478845   57719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:34.478868   57719 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862528 NodeName:old-k8s-version-862528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0410 22:48:34.479065   57719 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862528"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:34.479147   57719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0410 22:48:34.489950   57719 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:34.490007   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:34.500261   57719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0410 22:48:34.517530   57719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:34.534814   57719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0410 22:48:34.552669   57719 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:34.556612   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:34.569643   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:34.700791   57719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:34.719682   57719 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528 for IP: 192.168.61.178
	I0410 22:48:34.719703   57719 certs.go:194] generating shared ca certs ...
	I0410 22:48:34.719722   57719 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:34.719900   57719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:34.719951   57719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:34.719965   57719 certs.go:256] generating profile certs ...
	I0410 22:48:34.720091   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.key
	I0410 22:48:34.720155   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c
	I0410 22:48:34.720199   57719 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key
	I0410 22:48:34.720337   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:34.720376   57719 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:34.720386   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:34.720438   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:34.720472   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:34.720502   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:34.720557   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:34.721238   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:34.769810   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:34.805397   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:34.846743   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:34.888720   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0410 22:48:34.915958   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:48:34.962182   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:34.992444   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:35.023525   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:35.051098   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:35.077305   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:35.102172   57719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:35.121381   57719 ssh_runner.go:195] Run: openssl version
	I0410 22:48:35.127869   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:35.140056   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145172   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145242   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.152081   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:48:35.164621   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:35.176511   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182164   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182217   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.188968   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:35.201491   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:35.213468   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218519   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218586   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.224872   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:35.236964   57719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:35.242262   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:35.249245   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:35.256301   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:35.263359   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:35.270166   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:35.276953   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:48:35.283529   57719 kubeadm.go:391] StartCluster: {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:35.283643   57719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:35.283700   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.328461   57719 cri.go:89] found id: ""
	I0410 22:48:35.328532   57719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:35.340207   57719 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:35.340235   57719 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:35.340245   57719 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:35.340293   57719 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:35.351212   57719 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:35.352189   57719 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:48:35.352989   57719 kubeconfig.go:62] /home/jenkins/minikube-integration/18610-5679/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862528" cluster setting kubeconfig missing "old-k8s-version-862528" context setting]
	I0410 22:48:35.353956   57719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:32.122313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:32.122773   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:32.122816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:32.122717   59155 retry.go:31] will retry after 1.052378413s: waiting for machine to come up
	I0410 22:48:33.176207   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:33.176621   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:33.176665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:33.176568   59155 retry.go:31] will retry after 1.548572633s: waiting for machine to come up
	I0410 22:48:34.726554   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:34.726992   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:34.727020   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:34.726938   59155 retry.go:31] will retry after 1.800911659s: waiting for machine to come up
	I0410 22:48:36.529629   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:36.530133   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:36.530164   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:36.530085   59155 retry.go:31] will retry after 2.434743044s: waiting for machine to come up
	I0410 22:48:35.428830   57719 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:35.479813   57719 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.178
	I0410 22:48:35.479853   57719 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:35.479882   57719 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:35.479940   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.520506   57719 cri.go:89] found id: ""
	I0410 22:48:35.520577   57719 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:35.538167   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:35.548571   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:35.548600   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:35.548662   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:35.558559   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:35.558612   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:35.568950   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:35.578644   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:35.578712   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:35.589075   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.600265   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:35.600321   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.611459   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:35.621712   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:35.621785   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:35.632133   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:35.643494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:35.775309   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.133286   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.35793645s)
	I0410 22:48:37.133334   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.368687   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.497136   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.584652   57719 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:37.584744   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.085293   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.585489   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.584951   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:40.085144   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.966866   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:38.967360   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:38.967383   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:38.967339   59155 retry.go:31] will retry after 3.219302627s: waiting for machine to come up
	I0410 22:48:40.585356   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.084839   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.585434   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.085797   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.585578   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.085621   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.585581   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.584785   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:45.085394   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.409467   58701 start.go:364] duration metric: took 1m58.907071516s to acquireMachinesLock for "default-k8s-diff-port-519831"
	I0410 22:48:46.409536   58701 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:46.409557   58701 fix.go:54] fixHost starting: 
	I0410 22:48:46.410030   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:46.410080   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:46.427877   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I0410 22:48:46.428357   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:46.428836   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:48:46.428858   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:46.429163   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:46.429354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:48:46.429494   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:48:46.431151   58701 fix.go:112] recreateIfNeeded on default-k8s-diff-port-519831: state=Stopped err=<nil>
	I0410 22:48:46.431192   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	W0410 22:48:46.431372   58701 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:46.433597   58701 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-519831" ...
	I0410 22:48:42.187835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:42.188266   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:42.188305   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:42.188191   59155 retry.go:31] will retry after 2.924293511s: waiting for machine to come up
	I0410 22:48:45.113669   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114211   58186 main.go:141] libmachine: (embed-certs-706500) Found IP for machine: 192.168.39.10
	I0410 22:48:45.114229   58186 main.go:141] libmachine: (embed-certs-706500) Reserving static IP address...
	I0410 22:48:45.114243   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has current primary IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114685   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.114711   58186 main.go:141] libmachine: (embed-certs-706500) DBG | skip adding static IP to network mk-embed-certs-706500 - found existing host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"}
	I0410 22:48:45.114721   58186 main.go:141] libmachine: (embed-certs-706500) Reserved static IP address: 192.168.39.10
	I0410 22:48:45.114728   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Getting to WaitForSSH function...
	I0410 22:48:45.114743   58186 main.go:141] libmachine: (embed-certs-706500) Waiting for SSH to be available...
	I0410 22:48:45.116708   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.116963   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.117007   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.117139   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH client type: external
	I0410 22:48:45.117167   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa (-rw-------)
	I0410 22:48:45.117198   58186 main.go:141] libmachine: (embed-certs-706500) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:45.117224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | About to run SSH command:
	I0410 22:48:45.117236   58186 main.go:141] libmachine: (embed-certs-706500) DBG | exit 0
	I0410 22:48:45.240518   58186 main.go:141] libmachine: (embed-certs-706500) DBG | SSH cmd err, output: <nil>: 
	I0410 22:48:45.240843   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetConfigRaw
	I0410 22:48:45.241532   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.243908   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244293   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.244317   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244576   58186 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/config.json ...
	I0410 22:48:45.244775   58186 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:45.244799   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:45.245004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.247248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247639   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.247665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247859   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.248039   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248217   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248375   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.248543   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.248746   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.248766   58186 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:45.357146   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:45.357177   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357428   58186 buildroot.go:166] provisioning hostname "embed-certs-706500"
	I0410 22:48:45.357447   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357624   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.360299   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360700   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.360796   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360838   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.361049   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361183   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361367   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.361537   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.361702   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.361716   58186 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-706500 && echo "embed-certs-706500" | sudo tee /etc/hostname
	I0410 22:48:45.487121   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-706500
	
	I0410 22:48:45.487160   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.490242   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490597   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.490625   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490805   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.491004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491359   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.491576   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.491792   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.491824   58186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:45.606186   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:45.606212   58186 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:45.606246   58186 buildroot.go:174] setting up certificates
	I0410 22:48:45.606257   58186 provision.go:84] configureAuth start
	I0410 22:48:45.606269   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.606594   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.609459   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.609893   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.609932   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.610134   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.612631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.612945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.612979   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.613144   58186 provision.go:143] copyHostCerts
	I0410 22:48:45.613193   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:45.613207   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:45.613262   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:45.613378   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:45.613393   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:45.613427   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:45.613495   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:45.613505   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:45.613529   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:45.613592   58186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.embed-certs-706500 san=[127.0.0.1 192.168.39.10 embed-certs-706500 localhost minikube]
	I0410 22:48:45.737049   58186 provision.go:177] copyRemoteCerts
	I0410 22:48:45.737105   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:45.737129   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.739712   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740060   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.740089   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740347   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.740589   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.740763   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.740957   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:45.828677   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:45.854080   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:48:45.878704   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:45.902611   58186 provision.go:87] duration metric: took 296.343353ms to configureAuth
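
(For orientation: the "generating server cert ... san=[...]" step logged above creates a TLS server certificate whose SANs cover the guest's addresses and hostnames. The following is a minimal, hypothetical Go sketch of that idea only; it self-signs the certificate instead of signing it with the minikube CA key, and it is not minikube's provision.go code.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a certificate template whose SANs match the log
	// entry above (127.0.0.1, 192.168.39.10, embed-certs-706500, localhost, minikube).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-706500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-706500", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.10")},
	}
	// Self-sign (template is also the parent) purely to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
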
	I0410 22:48:45.902640   58186 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:45.902879   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:48:45.902962   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.905588   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.905950   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.905972   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.906165   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.906360   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906473   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906561   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.906725   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.906887   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.906911   58186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:46.172772   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:46.172807   58186 machine.go:97] duration metric: took 928.014662ms to provisionDockerMachine
	I0410 22:48:46.172823   58186 start.go:293] postStartSetup for "embed-certs-706500" (driver="kvm2")
	I0410 22:48:46.172836   58186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:46.172877   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.173197   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:46.173223   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.176113   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.176495   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176679   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.176896   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.177118   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.177328   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.260470   58186 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:46.265003   58186 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:46.265030   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:46.265088   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:46.265158   58186 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:46.265241   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:46.274931   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:46.300036   58186 start.go:296] duration metric: took 127.199834ms for postStartSetup
	I0410 22:48:46.300082   58186 fix.go:56] duration metric: took 19.322550114s for fixHost
	I0410 22:48:46.300108   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.302945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303252   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.303279   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303479   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.303700   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303861   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303990   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.304140   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:46.304308   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:46.304318   58186 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0410 22:48:46.409294   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789326.385898055
	
	I0410 22:48:46.409317   58186 fix.go:216] guest clock: 1712789326.385898055
	I0410 22:48:46.409327   58186 fix.go:229] Guest: 2024-04-10 22:48:46.385898055 +0000 UTC Remote: 2024-04-10 22:48:46.300087658 +0000 UTC m=+229.287947250 (delta=85.810397ms)
	I0410 22:48:46.409352   58186 fix.go:200] guest clock delta is within tolerance: 85.810397ms
	I0410 22:48:46.409360   58186 start.go:83] releasing machines lock for "embed-certs-706500", held for 19.431860062s
	I0410 22:48:46.409389   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.409752   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:46.412201   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412616   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.412651   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412790   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413361   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413559   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413617   58186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:46.413665   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.413796   58186 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:46.413831   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.416879   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417268   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417428   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.417630   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.417811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.417835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417858   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417938   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.418030   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.418154   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.418284   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.418463   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.529204   58186 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:46.535396   58186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:46.681100   58186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:46.687278   58186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:46.687340   58186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:46.703105   58186 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:46.703128   58186 start.go:494] detecting cgroup driver to use...
	I0410 22:48:46.703191   58186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:46.719207   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:46.733444   58186 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:46.733509   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:46.747369   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:46.762231   58186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:46.874897   58186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:47.023672   58186 docker.go:233] disabling docker service ...
	I0410 22:48:47.023749   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:47.038963   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:47.053827   58186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:46.435268   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Start
	I0410 22:48:46.435498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring networks are active...
	I0410 22:48:46.436266   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network default is active
	I0410 22:48:46.436691   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network mk-default-k8s-diff-port-519831 is active
	I0410 22:48:46.437163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Getting domain xml...
	I0410 22:48:46.437799   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Creating domain...
	I0410 22:48:47.206641   58186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:47.363331   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:47.380657   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:47.402234   58186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:48:47.402306   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.419356   58186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:47.419417   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.435320   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.450812   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.462588   58186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:47.474323   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.494156   58186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.515195   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
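
(For orientation: the sed commands logged above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager. Below is a minimal, hypothetical Go sketch of the same in-place rewrite; patchCrioConf is an illustrative helper, not minikube code, and the path and values are taken from the log.)

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf forces pause_image and cgroup_manager to the desired values,
// mirroring the two sed substitutions in the log above.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "..."|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs")
	if err != nil {
		fmt.Println(err)
	}
}
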
	I0410 22:48:47.526148   58186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:47.536045   58186 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:47.536106   58186 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:47.549556   58186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:48:47.567236   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:47.702628   58186 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:47.848908   58186 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:47.849000   58186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:47.854126   58186 start.go:562] Will wait 60s for crictl version
	I0410 22:48:47.854191   58186 ssh_runner.go:195] Run: which crictl
	I0410 22:48:47.858095   58186 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:47.897714   58186 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:47.897805   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.927597   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.958357   58186 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:48:45.584769   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.085396   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.585857   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.085186   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.585668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.085585   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.585617   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.085227   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.585626   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:50.084900   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.959811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:47.962805   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963246   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:47.963276   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963510   58186 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:47.967753   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:47.981154   58186 kubeadm.go:877] updating cluster {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:47.981258   58186 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:48:47.981298   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:48.018208   58186 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:48:48.018274   58186 ssh_runner.go:195] Run: which lz4
	I0410 22:48:48.023613   58186 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0410 22:48:48.029036   58186 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:48.029063   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:48:49.637729   58186 crio.go:462] duration metric: took 1.61414003s to copy over tarball
	I0410 22:48:49.637796   58186 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:52.046454   58186 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.408634496s)
	I0410 22:48:52.046482   58186 crio.go:469] duration metric: took 2.408728343s to extract the tarball
	I0410 22:48:52.046489   58186 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:47.701355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting to get IP...
	I0410 22:48:47.702406   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.702994   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.703067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.702962   59362 retry.go:31] will retry after 292.834608ms: waiting for machine to come up
	I0410 22:48:47.997294   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997757   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997785   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.997701   59362 retry.go:31] will retry after 341.35168ms: waiting for machine to come up
	I0410 22:48:48.340842   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341347   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341379   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.341279   59362 retry.go:31] will retry after 438.041848ms: waiting for machine to come up
	I0410 22:48:48.780565   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781092   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781116   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.781038   59362 retry.go:31] will retry after 557.770882ms: waiting for machine to come up
	I0410 22:48:49.340858   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.341282   59362 retry.go:31] will retry after 637.316206ms: waiting for machine to come up
	I0410 22:48:49.980256   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980737   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980761   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.980696   59362 retry.go:31] will retry after 909.873955ms: waiting for machine to come up
	I0410 22:48:50.891776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892197   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892229   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:50.892147   59362 retry.go:31] will retry after 745.06949ms: waiting for machine to come up
	I0410 22:48:51.638436   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638907   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638933   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:51.638854   59362 retry.go:31] will retry after 1.060037191s: waiting for machine to come up
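
(For orientation: the repeated "will retry after ...: waiting for machine to come up" lines above come from a backoff loop that polls libvirt until the VM obtains a DHCP lease. The following is a minimal, hypothetical Go sketch of that retry pattern; retryWithBackoff and the delays are illustrative, not minikube's retry.go.)

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out,
// roughly doubling the wait between tries, like the log lines above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("machine has no IP address yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}
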
	I0410 22:48:50.585691   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.085669   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.585308   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.085393   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.585619   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.085643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.585076   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.585027   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.085629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.087135   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:52.139368   58186 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:48:52.139389   58186 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:48:52.139397   58186 kubeadm.go:928] updating node { 192.168.39.10 8443 v1.29.3 crio true true} ...
	I0410 22:48:52.139535   58186 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-706500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:52.139629   58186 ssh_runner.go:195] Run: crio config
	I0410 22:48:52.193347   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:48:52.193375   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:52.193390   58186 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:52.193429   58186 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-706500 NodeName:embed-certs-706500 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:48:52.193606   58186 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-706500"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:52.193686   58186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:48:52.206450   58186 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:52.206507   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:52.218898   58186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0410 22:48:52.239285   58186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:52.257083   58186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0410 22:48:52.275448   58186 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:52.279486   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:52.293308   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:52.428424   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:52.446713   58186 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500 for IP: 192.168.39.10
	I0410 22:48:52.446738   58186 certs.go:194] generating shared ca certs ...
	I0410 22:48:52.446759   58186 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:52.446937   58186 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:52.446980   58186 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:52.446990   58186 certs.go:256] generating profile certs ...
	I0410 22:48:52.447059   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/client.key
	I0410 22:48:52.447124   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key.f3045f1a
	I0410 22:48:52.447156   58186 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key
	I0410 22:48:52.447294   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:52.447328   58186 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:52.447335   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:52.447354   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:52.447374   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:52.447405   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:52.447457   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:52.448166   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:52.481862   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:52.530983   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:52.572191   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:52.614466   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0410 22:48:52.644331   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:48:52.672811   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:52.698376   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:52.723998   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:52.749405   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:52.777529   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:52.803663   58186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:52.822234   58186 ssh_runner.go:195] Run: openssl version
	I0410 22:48:52.830835   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:52.843425   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848384   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848444   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.854869   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:52.867228   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:52.879319   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884241   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884324   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.890349   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:52.902398   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:52.913996   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918757   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918824   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.924669   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
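
(For orientation: the openssl x509 -hash / ln -fs pairs logged above install each CA file under /etc/ssl/certs/<subject-hash>.0, which is how OpenSSL-based clients locate trusted certificates. Below is a minimal, hypothetical Go sketch of that step run locally; linkCert is an illustrative helper, not minikube's certs.go, and it assumes the openssl binary is on PATH.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the subject hash of a certificate and creates the
// <hash>.0 symlink in the trust directory, as in the log lines above.
func linkCert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
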
	I0410 22:48:52.936581   58186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:52.941242   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:52.947526   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:52.953939   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:52.960447   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:52.966829   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:52.973148   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:48:52.979557   58186 kubeadm.go:391] StartCluster: {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:52.979669   58186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:52.979744   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.018394   58186 cri.go:89] found id: ""
	I0410 22:48:53.018479   58186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:53.030088   58186 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:53.030112   58186 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:53.030118   58186 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:53.030184   58186 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:53.041035   58186 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:53.042312   58186 kubeconfig.go:125] found "embed-certs-706500" server: "https://192.168.39.10:8443"
	I0410 22:48:53.044306   58186 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:53.054911   58186 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.10
	I0410 22:48:53.054948   58186 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:53.054974   58186 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:53.055020   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.093035   58186 cri.go:89] found id: ""
	I0410 22:48:53.093109   58186 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:53.111257   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:53.122098   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:53.122125   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:53.122176   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:53.133513   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:53.133587   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:53.144275   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:53.154921   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:53.155000   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:53.165604   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.175520   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:53.175582   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.186094   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:53.196086   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:53.196156   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:53.206564   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:53.217180   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:53.336883   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.151708   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.367165   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.457694   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.572579   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:54.572693   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.073196   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.572865   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.595374   58186 api_server.go:72] duration metric: took 1.022777759s to wait for apiserver process to appear ...
	I0410 22:48:55.595403   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:48:55.595424   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
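
(For orientation: the "waiting for apiserver healthz status" step above polls https://192.168.39.10:8443/healthz until the control plane answers. The following is a minimal, hypothetical Go sketch of such a poll loop; waitForHealthz is illustrative, not minikube's api_server.go, and it skips TLS verification because the server is reached by IP in this sketch.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it returns 200 OK or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	// 192.168.39.10 is the embed-certs-706500 guest IP from the log above.
	if err := waitForHealthz("https://192.168.39.10:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
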
	I0410 22:48:52.701137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701574   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701606   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:52.701529   59362 retry.go:31] will retry after 1.792719263s: waiting for machine to come up
	I0410 22:48:54.496380   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496823   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:54.496740   59362 retry.go:31] will retry after 2.321115222s: waiting for machine to come up
	I0410 22:48:56.819654   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820140   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:56.820072   59362 retry.go:31] will retry after 2.57309135s: waiting for machine to come up
	I0410 22:48:55.585506   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.585876   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.085775   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.585260   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.585588   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.085661   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.585663   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:00.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.843447   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:48:58.843487   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:48:58.843504   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:58.962381   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:58.962431   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.095611   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.100754   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.100781   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.595968   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.606936   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.606977   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.096182   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.106346   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:00.106388   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.595923   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.600197   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:49:00.609220   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:00.609246   58186 api_server.go:131] duration metric: took 5.013835577s to wait for apiserver health ...
	I0410 22:49:00.609256   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:49:00.609263   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:00.611220   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:00.612765   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:00.625567   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:00.648581   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:00.657652   58186 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:00.657688   58186 system_pods.go:61] "coredns-76f75df574-j4kj8" [1986e6b6-e6c7-4212-bdd5-10360a0b897c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:00.657696   58186 system_pods.go:61] "etcd-embed-certs-706500" [acbf9245-d4f8-4fa6-88a7-4f891f9f8403] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:00.657704   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [b9c79d1d-f571-4ed8-a68f-512e8a2a1705] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:00.657709   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [d229b85d-9a8d-4cd0-ac48-a6aea3769581] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:00.657715   58186 system_pods.go:61] "kube-proxy-8kzff" [ce35a33f-1697-44a7-ad64-83895236bc6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:00.657720   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [72c68a6c-beba-48a5-937b-51c40aab0386] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:00.657726   58186 system_pods.go:61] "metrics-server-57f55c9bc5-4r9pl" [40a91fc1-9e0a-4bcc-a2e9-65e9f2d2b960] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:00.657733   58186 system_pods.go:61] "storage-provisioner" [10f7637e-e6e0-4f04-b1eb-ac3bd205064f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:00.657742   58186 system_pods.go:74] duration metric: took 9.141859ms to wait for pod list to return data ...
	I0410 22:49:00.657752   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:00.662255   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:00.662300   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:00.662315   58186 node_conditions.go:105] duration metric: took 4.553643ms to run NodePressure ...
	I0410 22:49:00.662338   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:00.957923   58186 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962553   58186 kubeadm.go:733] kubelet initialised
	I0410 22:49:00.962575   58186 kubeadm.go:734] duration metric: took 4.616848ms waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962585   58186 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:00.968387   58186 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
	I0410 22:48:59.395416   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395864   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395893   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:59.395819   59362 retry.go:31] will retry after 2.378137008s: waiting for machine to come up
	I0410 22:49:01.776037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776587   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776641   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:49:01.776526   59362 retry.go:31] will retry after 4.360839049s: waiting for machine to come up
	I0410 22:49:00.585234   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.084884   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.585066   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.085697   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.585573   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.085552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.585521   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.584802   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:05.085266   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.975009   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:04.976854   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:06.141509   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142008   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Found IP for machine: 192.168.72.170
	I0410 22:49:06.142037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has current primary IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142047   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserving static IP address...
	I0410 22:49:06.142422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserved static IP address: 192.168.72.170
	I0410 22:49:06.142451   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for SSH to be available...
	I0410 22:49:06.142476   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.142499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | skip adding static IP to network mk-default-k8s-diff-port-519831 - found existing host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"}
	I0410 22:49:06.142518   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Getting to WaitForSSH function...
	I0410 22:49:06.144878   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145206   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.145238   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145326   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH client type: external
	I0410 22:49:06.145365   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa (-rw-------)
	I0410 22:49:06.145401   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:06.145421   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | About to run SSH command:
	I0410 22:49:06.145438   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | exit 0
	I0410 22:49:06.272546   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:06.272919   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetConfigRaw
	I0410 22:49:06.273605   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.276234   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276610   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.276644   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276851   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:49:06.277100   58701 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:06.277127   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:06.277400   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.279729   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.280146   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280295   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.280480   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280658   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280794   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.280939   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.281121   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.281138   58701 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:06.385219   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:06.385254   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385498   58701 buildroot.go:166] provisioning hostname "default-k8s-diff-port-519831"
	I0410 22:49:06.385527   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385716   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.388422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.388922   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.388963   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.389072   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.389292   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389462   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389600   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.389751   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.389924   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.389938   58701 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-519831 && echo "default-k8s-diff-port-519831" | sudo tee /etc/hostname
	I0410 22:49:06.507221   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-519831
	
	I0410 22:49:06.507252   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.509837   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510179   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.510225   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510385   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.510561   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510880   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.511040   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.511236   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.511262   58701 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-519831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-519831/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-519831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:06.626097   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:06.626129   58701 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:06.626153   58701 buildroot.go:174] setting up certificates
	I0410 22:49:06.626163   58701 provision.go:84] configureAuth start
	I0410 22:49:06.626173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.626499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.629067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629412   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.629450   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629559   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.632132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632517   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.632548   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632674   58701 provision.go:143] copyHostCerts
	I0410 22:49:06.632734   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:06.632755   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:06.632822   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:06.633021   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:06.633037   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:06.633078   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:06.633179   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:06.633191   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:06.633223   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:06.633295   58701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-519831 san=[127.0.0.1 192.168.72.170 default-k8s-diff-port-519831 localhost minikube]
	I0410 22:49:06.835016   58701 provision.go:177] copyRemoteCerts
	I0410 22:49:06.835077   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:06.835104   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.837769   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838124   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.838152   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838327   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.838519   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.838669   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.838808   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:06.921929   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:06.947855   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0410 22:49:06.972865   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:06.999630   58701 provision.go:87] duration metric: took 373.45654ms to configureAuth
	I0410 22:49:06.999658   58701 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:06.999872   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:49:06.999942   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.003015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003418   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.003452   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.003793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.003946   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.004062   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.004208   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.004425   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.004448   58701 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:07.273568   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:07.273601   58701 machine.go:97] duration metric: took 996.483382ms to provisionDockerMachine
	I0410 22:49:07.273618   58701 start.go:293] postStartSetup for "default-k8s-diff-port-519831" (driver="kvm2")
	I0410 22:49:07.273634   58701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:07.273660   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.274009   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:07.274040   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.276736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.277155   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.277537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.277740   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.277891   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.361056   58701 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:07.365729   58701 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:07.365759   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:07.365834   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:07.365935   58701 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:07.366064   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:07.376754   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:07.509384   57270 start.go:364] duration metric: took 56.035567079s to acquireMachinesLock for "no-preload-646133"
	I0410 22:49:07.509424   57270 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:49:07.509432   57270 fix.go:54] fixHost starting: 
	I0410 22:49:07.509837   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:07.509872   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:07.526882   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0410 22:49:07.527337   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:07.527780   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:49:07.527801   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:07.528077   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:07.528238   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:07.528366   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:49:07.529732   57270 fix.go:112] recreateIfNeeded on no-preload-646133: state=Stopped err=<nil>
	I0410 22:49:07.529755   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	W0410 22:49:07.529878   57270 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:49:07.531875   57270 out.go:177] * Restarting existing kvm2 VM for "no-preload-646133" ...
	I0410 22:49:07.402691   58701 start.go:296] duration metric: took 129.059293ms for postStartSetup
	I0410 22:49:07.402731   58701 fix.go:56] duration metric: took 20.99318672s for fixHost
	I0410 22:49:07.402751   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.405634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.405955   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.405996   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.406161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.406378   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406647   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.406826   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.407062   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.407079   58701 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:49:07.509210   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789347.471050157
	
	I0410 22:49:07.509233   58701 fix.go:216] guest clock: 1712789347.471050157
	I0410 22:49:07.509241   58701 fix.go:229] Guest: 2024-04-10 22:49:07.471050157 +0000 UTC Remote: 2024-04-10 22:49:07.402735415 +0000 UTC m=+140.054227768 (delta=68.314742ms)
	I0410 22:49:07.509287   58701 fix.go:200] guest clock delta is within tolerance: 68.314742ms
	I0410 22:49:07.509297   58701 start.go:83] releasing machines lock for "default-k8s-diff-port-519831", held for 21.099785205s
	I0410 22:49:07.509328   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.509613   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:07.512255   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.512667   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512827   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513364   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513531   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513610   58701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:07.513649   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.513750   58701 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:07.513771   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.516338   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516685   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.516802   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516951   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517142   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.517161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.517310   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517470   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517602   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517604   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.517765   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.594218   58701 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:07.633783   58701 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:07.790430   58701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:07.797279   58701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:07.797358   58701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:07.815457   58701 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:07.815488   58701 start.go:494] detecting cgroup driver to use...
	I0410 22:49:07.815561   58701 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:07.833038   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:07.848577   58701 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:07.848648   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:07.863609   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:07.878299   58701 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:07.999388   58701 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:08.155534   58701 docker.go:233] disabling docker service ...
	I0410 22:49:08.155613   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:08.175545   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:08.195923   58701 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:08.340282   58701 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:08.485647   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:08.500245   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:08.520493   58701 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:08.520582   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.535455   58701 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:08.535521   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.547058   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.559638   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.571374   58701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:08.583796   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.598091   58701 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.622634   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
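The sed one-liners above pin CRI-O's pause image to registry.k8s.io/pause:3.9, force the cgroupfs cgroup manager, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A minimal Go sketch of the same in-place edit, assuming direct file access instead of the ssh_runner used here (the helper name and local path are illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // rewriteCrioConf mirrors the sed edits above: it pins the pause image and
    // the cgroup manager in an existing 02-crio.conf. Illustrative only.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Hypothetical local copy; the log edits /etc/crio/crio.conf.d/02-crio.conf over SSH.
        if err := rewriteCrioConf("02-crio.conf", "registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
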
	I0410 22:49:08.633858   58701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:08.645114   58701 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:08.645167   58701 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:08.660204   58701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:08.671345   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:08.804523   58701 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:08.953644   58701 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:08.953717   58701 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:08.958661   58701 start.go:562] Will wait 60s for crictl version
	I0410 22:49:08.958715   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:49:08.962938   58701 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:09.006335   58701 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:49:09.006425   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.037315   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.069366   58701 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
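The runtime detection just above shells out to `sudo /usr/bin/crictl version` and reads the RuntimeName and RuntimeVersion fields before announcing CRI-O 1.29.1. A hedged sketch of parsing that plain-text output (function name hypothetical):

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    // crictlVersion runs `crictl version` and returns the RuntimeName and
    // RuntimeVersion fields from its plain-text output, e.g. "cri-o", "1.29.1".
    func crictlVersion() (name, version string, err error) {
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
        if err != nil {
            return "", "", err
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            k, v, ok := strings.Cut(sc.Text(), ":")
            if !ok {
                continue
            }
            switch strings.TrimSpace(k) {
            case "RuntimeName":
                name = strings.TrimSpace(v)
            case "RuntimeVersion":
                version = strings.TrimSpace(v)
            }
        }
        return name, version, sc.Err()
    }

    func main() {
        name, version, err := crictlVersion()
        if err != nil {
            fmt.Println("crictl not available:", err)
            return
        }
        fmt.Printf("runtime %s %s\n", name, version)
    }
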
	I0410 22:49:07.533174   57270 main.go:141] libmachine: (no-preload-646133) Calling .Start
	I0410 22:49:07.533352   57270 main.go:141] libmachine: (no-preload-646133) Ensuring networks are active...
	I0410 22:49:07.534117   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network default is active
	I0410 22:49:07.534413   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network mk-no-preload-646133 is active
	I0410 22:49:07.534851   57270 main.go:141] libmachine: (no-preload-646133) Getting domain xml...
	I0410 22:49:07.535553   57270 main.go:141] libmachine: (no-preload-646133) Creating domain...
	I0410 22:49:08.844990   57270 main.go:141] libmachine: (no-preload-646133) Waiting to get IP...
	I0410 22:49:08.845908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:08.846363   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:08.846459   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:08.846332   59513 retry.go:31] will retry after 241.150391ms: waiting for machine to come up
	I0410 22:49:09.088961   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.089455   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.089489   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.089417   59513 retry.go:31] will retry after 349.96397ms: waiting for machine to come up
	I0410 22:49:09.441226   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.441799   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.441828   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.441754   59513 retry.go:31] will retry after 444.576999ms: waiting for machine to come up
	I0410 22:49:05.585408   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.085250   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.585503   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.085422   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.584909   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.084863   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.585859   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.085175   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.585660   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:10.085221   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
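In parallel, process 57719 waits for a kube-apiserver process to appear by running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every half second. A small sketch of that polling loop (the timeout value is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverPID polls pgrep until the kube-apiserver process shows up or the
    // timeout expires, mirroring the half-second cadence in the log above.
    func apiserverPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                return strings.TrimSpace(string(out)), nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        pid, err := apiserverPID(2 * time.Minute)
        fmt.Println(pid, err)
    }
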
	I0410 22:49:07.475385   58186 pod_ready.go:92] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:07.475414   58186 pod_ready.go:81] duration metric: took 6.506993581s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:07.475424   58186 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:09.486133   58186 pod_ready.go:102] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:11.483972   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.483994   58186 pod_ready.go:81] duration metric: took 4.008564427s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.484005   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490340   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.490380   58186 pod_ready.go:81] duration metric: took 6.362017ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490399   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497078   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.497110   58186 pod_ready.go:81] duration metric: took 6.701645ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497124   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504091   58186 pod_ready.go:92] pod "kube-proxy-8kzff" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.504118   58186 pod_ready.go:81] duration metric: took 6.985136ms for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504132   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510619   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.510656   58186 pod_ready.go:81] duration metric: took 6.513031ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510674   58186 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
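Meanwhile the embed-certs-706500 run walks the control-plane pods and waits for each to report the Ready condition (pod_ready.go). A sketch of the underlying check using client-go; the kubeconfig path is a placeholder, the pod name is taken from this log for illustration, and the real code bounds the wait at 4m0s:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for i := 0; i < 120; i++ { // ~4 minutes at 2s per attempt, like the waits above
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-706500", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
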
	I0410 22:49:09.070592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:09.073850   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:09.074190   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074388   58701 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:09.079170   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:09.093764   58701 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:09.093973   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:49:09.094040   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:09.140874   58701 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:49:09.140951   58701 ssh_runner.go:195] Run: which lz4
	I0410 22:49:09.146775   58701 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0410 22:49:09.152876   58701 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:49:09.152917   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:49:10.827934   58701 crio.go:462] duration metric: took 1.681191787s to copy over tarball
	I0410 22:49:10.828019   58701 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:49:09.888688   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.892576   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.892607   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.889179   59513 retry.go:31] will retry after 560.585608ms: waiting for machine to come up
	I0410 22:49:10.451001   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:10.451630   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:10.451663   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:10.451590   59513 retry.go:31] will retry after 601.519186ms: waiting for machine to come up
	I0410 22:49:11.054324   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.054664   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.054693   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.054653   59513 retry.go:31] will retry after 750.183717ms: waiting for machine to come up
	I0410 22:49:11.805908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.806303   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.806331   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.806254   59513 retry.go:31] will retry after 883.805148ms: waiting for machine to come up
	I0410 22:49:12.691316   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:12.691861   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:12.691893   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:12.691804   59513 retry.go:31] will retry after 1.39605629s: waiting for machine to come up
	I0410 22:49:14.090350   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:14.090795   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:14.090821   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:14.090753   59513 retry.go:31] will retry after 1.388324423s: waiting for machine to come up
	I0410 22:49:10.585333   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.585062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.085191   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.585644   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.085615   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.585355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.085270   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.584868   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:15.085639   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.521844   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:16.041569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:13.328492   58701 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.500439721s)
	I0410 22:49:13.328534   58701 crio.go:469] duration metric: took 2.500564923s to extract the tarball
	I0410 22:49:13.328545   58701 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:49:13.367568   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:13.415759   58701 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:49:13.415780   58701 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:49:13.415788   58701 kubeadm.go:928] updating node { 192.168.72.170 8444 v1.29.3 crio true true} ...
	I0410 22:49:13.415899   58701 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-519831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:13.415982   58701 ssh_runner.go:195] Run: crio config
	I0410 22:49:13.473019   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:13.473046   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:13.473063   58701 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:13.473100   58701 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.170 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-519831 NodeName:default-k8s-diff-port-519831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:13.473261   58701 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-519831"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:13.473325   58701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:49:13.487302   58701 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:13.487368   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:13.498496   58701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0410 22:49:13.518312   58701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:49:13.537972   58701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
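The kubeadm.yaml just copied to /var/tmp/minikube is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks such a file and prints each document's kind, assuming gopkg.in/yaml.v3 and a local copy of the file:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of /var/tmp/minikube/kubeadm.yaml
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break // no more documents
                }
                panic(err)
            }
            fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
        }
    }
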
	I0410 22:49:13.558714   58701 ssh_runner.go:195] Run: grep 192.168.72.170	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:13.562886   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:13.575957   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:13.706316   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
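Two lines up, a grep -v / echo pipeline drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current IP. The same idempotent rewrite sketched in Go, against a scratch copy rather than the real /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost removes any line ending in "\t<host>" and appends "ip\thost",
    // mirroring the grep -v / echo pipeline in the log. Path is a scratch copy.
    func pinHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("hosts.copy", "192.168.72.170", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
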
	I0410 22:49:13.725898   58701 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831 for IP: 192.168.72.170
	I0410 22:49:13.725924   58701 certs.go:194] generating shared ca certs ...
	I0410 22:49:13.725944   58701 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:13.726119   58701 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:13.726173   58701 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:13.726185   58701 certs.go:256] generating profile certs ...
	I0410 22:49:13.726297   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/client.key
	I0410 22:49:13.726398   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key.ff579077
	I0410 22:49:13.726454   58701 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key
	I0410 22:49:13.726606   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:13.726644   58701 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:13.726656   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:13.726685   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:13.726725   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:13.726756   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:13.726811   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:13.727747   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:13.780060   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:13.818446   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:13.865986   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:13.897578   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0410 22:49:13.937123   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:49:13.970558   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:13.997678   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:49:14.025173   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:14.051190   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:14.079109   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:14.107547   58701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:14.128029   58701 ssh_runner.go:195] Run: openssl version
	I0410 22:49:14.134686   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:14.148733   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154057   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154114   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.160626   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:14.174406   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:14.187513   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193279   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193344   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.199518   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:14.213538   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:14.225618   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230610   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230666   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.236756   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:14.250041   58701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:14.255320   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:14.262821   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:14.268854   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:14.275152   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:14.281598   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:14.287895   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
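Before reusing the existing control-plane certificates, each one is checked with `openssl x509 -checkend 86400`, i.e. it must remain valid for at least another day. The equivalent check with crypto/x509 (the path in main is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid for at
    // least d, the equivalent of the `openssl x509 -checkend` calls above.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM data found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("apiserver.crt", 24*time.Hour) // hypothetical local path
        fmt.Println(ok, err)
    }
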
	I0410 22:49:14.294125   58701 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:14.294246   58701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:14.294301   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.332192   58701 cri.go:89] found id: ""
	I0410 22:49:14.332268   58701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:14.343174   58701 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:14.343198   58701 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:14.343205   58701 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:14.343261   58701 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:14.355648   58701 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:14.357310   58701 kubeconfig.go:125] found "default-k8s-diff-port-519831" server: "https://192.168.72.170:8444"
	I0410 22:49:14.360713   58701 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:14.371972   58701 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.170
	I0410 22:49:14.372011   58701 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:14.372025   58701 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:14.372083   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.410517   58701 cri.go:89] found id: ""
	I0410 22:49:14.410594   58701 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:14.428686   58701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:14.443256   58701 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:14.443281   58701 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:14.443353   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0410 22:49:14.455086   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:14.455156   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:14.466151   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0410 22:49:14.476799   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:14.476852   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:14.487588   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.498476   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:14.498534   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.509248   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0410 22:49:14.520223   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:14.520287   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:14.531388   58701 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:14.542775   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:14.673733   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.773338   58701 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099570437s)
	I0410 22:49:15.773385   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.985355   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.052996   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
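The restart path then re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written kubeadm.yaml, with PATH pointing at the pinned v1.29.3 binaries. A sketch of driving those phases from Go; the paths mirror the log, but the helper itself is illustrative, not minikube's implementation:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    // runPhase invokes a single `kubeadm init phase ...` using the kubeadm from
    // binDir and extending the child's PATH, like the env PATH=... calls above.
    func runPhase(binDir, config string, phase ...string) error {
        args := append([]string{"init", "phase"}, phase...)
        args = append(args, "--config", config)
        cmd := exec.Command(filepath.Join(binDir, "kubeadm"), args...)
        cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            if err := runPhase("/var/lib/minikube/binaries/v1.29.3", "/var/tmp/minikube/kubeadm.yaml", p...); err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
        }
    }
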
	I0410 22:49:16.126251   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:16.126362   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.626615   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.127289   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.166269   58701 api_server.go:72] duration metric: took 1.040013076s to wait for apiserver process to appear ...
	I0410 22:49:17.166315   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:17.166339   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:17.166964   58701 api_server.go:269] stopped: https://192.168.72.170:8444/healthz: Get "https://192.168.72.170:8444/healthz": dial tcp 192.168.72.170:8444: connect: connection refused
	I0410 22:49:15.480947   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:15.481358   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:15.481386   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:15.481309   59513 retry.go:31] will retry after 2.276682979s: waiting for machine to come up
	I0410 22:49:17.759404   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:17.759931   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:17.759975   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:17.759887   59513 retry.go:31] will retry after 2.254373826s: waiting for machine to come up
	I0410 22:49:15.585476   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.085404   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.585123   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.085713   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.584877   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.085601   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.585222   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.084891   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.585215   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:20.085668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.519156   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:20.520053   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:17.667248   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.709507   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:20.709538   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:20.709554   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.740392   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:20.740483   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.166658   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.174343   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.174378   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.667345   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.685078   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.685112   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:22.166644   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:22.171611   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:49:22.178452   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:22.178484   58701 api_server.go:131] duration metric: took 5.012161431s to wait for apiserver health ...
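For readers following the trace: the blocks of [+]/[-] lines above are minikube polling the apiserver's /healthz endpoint, which keeps returning 500 until the rbac/bootstrap-roles and scheduling post-start hooks complete, then flips to 200. A minimal Go sketch of that polling loop follows; the URL, the timeout, and the InsecureSkipVerify transport are illustrative assumptions for the sketch, not minikube's actual api_server.go client.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver presents a self-signed cert in this scenario;
    		// skipping verification is an assumption made only for this sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // the log above shows roughly 500ms between checks
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.170:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }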
	I0410 22:49:22.178493   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:22.178499   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:22.180370   58701 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:22.181768   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:22.197462   58701 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
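The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration announced at out.go:177. The log does not show its contents; the sketch below writes a typical bridge-plugin conflist of that shape, with the subnet and plugin list as placeholder assumptions rather than the exact values from this run.

    package main

    import "os"

    // A plausible bridge CNI conflist of the kind minikube places at
    // /etc/cni/net.d/1-k8s.conflist; subnet and plugin details are placeholders.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	// Local equivalent of the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }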
	I0410 22:49:22.218348   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:22.236800   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:22.236830   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:22.236837   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:22.236843   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:22.236849   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:22.236861   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:22.236866   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:22.236871   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:22.236876   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:22.236884   58701 system_pods.go:74] duration metric: took 18.510987ms to wait for pod list to return data ...
	I0410 22:49:22.236893   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:22.242143   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:22.242167   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:22.242177   58701 node_conditions.go:105] duration metric: took 5.279415ms to run NodePressure ...
	I0410 22:49:22.242192   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:22.532741   58701 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537418   58701 kubeadm.go:733] kubelet initialised
	I0410 22:49:22.537444   58701 kubeadm.go:734] duration metric: took 4.675489ms waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537453   58701 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:22.543364   58701 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.549161   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549186   58701 pod_ready.go:81] duration metric: took 5.796619ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.549196   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549207   58701 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.554131   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554156   58701 pod_ready.go:81] duration metric: took 4.941026ms for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.554165   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554172   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.558783   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558812   58701 pod_ready.go:81] duration metric: took 4.633262ms for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.558822   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558828   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.622314   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622344   58701 pod_ready.go:81] duration metric: took 63.505681ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.622356   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622370   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.022239   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022266   58701 pod_ready.go:81] duration metric: took 399.888837ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.022275   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022286   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.422213   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422245   58701 pod_ready.go:81] duration metric: took 399.950443ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.422257   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422270   58701 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.823832   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823858   58701 pod_ready.go:81] duration metric: took 401.581123ms for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.823868   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823875   58701 pod_ready.go:38] duration metric: took 1.286413141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
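Each pod_ready.go:97 line above short-circuits because the node hosting the pod is not yet Ready, so the per-pod wait ends almost immediately and the remaining 4m0s budget is left for later checks. A stripped-down version of that readiness probe using client-go is sketched below; the kubeconfig path and pod name are taken from the log, but the loop itself is an illustration, not minikube's pod_ready.go.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18610-5679/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // mirrors the "waiting up to 4m0s" above
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-ghnvx", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }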
	I0410 22:49:23.823889   58701 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:49:23.840663   58701 ops.go:34] apiserver oom_adj: -16
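The ops.go:34 line confirms the restarted kube-apiserver kept its -16 OOM-score adjustment. A small Go equivalent of the shell one-liner above (pgrep followed by a read of /proc/<pid>/oom_adj); the real check is the bash command shown in the log, this is only a sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same lookup as the log's $(pgrep kube-apiserver); take the first PID
    	// if more than one matches.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.Fields(string(out))[0]
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }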
	I0410 22:49:23.840691   58701 kubeadm.go:591] duration metric: took 9.497479077s to restartPrimaryControlPlane
	I0410 22:49:23.840702   58701 kubeadm.go:393] duration metric: took 9.546582608s to StartCluster
	I0410 22:49:23.840718   58701 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.840795   58701 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:49:23.843350   58701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.843613   58701 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:49:23.845385   58701 out.go:177] * Verifying Kubernetes components...
	I0410 22:49:23.843685   58701 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:49:23.846686   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:23.845421   58701 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846834   58701 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-519831"
	I0410 22:49:23.843826   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	W0410 22:49:23.846852   58701 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:49:23.846901   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.845429   58701 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846969   58701 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-519831"
	I0410 22:49:23.845433   58701 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.847069   58701 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.847088   58701 addons.go:243] addon metrics-server should already be in state true
	I0410 22:49:23.847122   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.847349   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847358   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847381   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847384   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847495   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847532   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.863090   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I0410 22:49:23.863240   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0410 22:49:23.863685   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.863793   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.864315   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864333   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864356   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864371   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864741   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864749   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864949   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.865210   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.865258   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.867599   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0410 22:49:23.868035   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.868627   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.868652   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.868739   58701 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.868757   58701 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:49:23.868785   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.869023   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.869094   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869136   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.869562   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869630   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.881589   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0410 22:49:23.881997   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.882429   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.882442   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.882719   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.882914   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.884708   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.886865   58701 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:49:23.886946   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0410 22:49:23.888493   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:49:23.888511   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:49:23.888532   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.888850   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.889129   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0410 22:49:23.889513   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.889536   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.889601   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.890020   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.890265   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.890285   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.890308   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.890667   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.891458   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.891496   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.892090   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.892232   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.894143   58701 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:20.015689   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:20.016192   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:20.016230   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:20.016163   59513 retry.go:31] will retry after 2.611766259s: waiting for machine to come up
	I0410 22:49:22.629270   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:22.629704   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:22.629731   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:22.629644   59513 retry.go:31] will retry after 3.270808972s: waiting for machine to come up
	I0410 22:49:23.892695   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.892720   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.895489   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.895599   58701 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:23.895609   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:49:23.895623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.896367   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.896558   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.896754   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.898964   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899320   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.899355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899535   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.899715   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.899855   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.899999   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.910046   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0410 22:49:23.910471   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.911056   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.911077   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.911445   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.911653   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.913330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.913603   58701 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:23.913619   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:49:23.913637   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.916303   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916759   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.916820   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916923   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.917137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.917377   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.917517   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:24.067636   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:24.087396   58701 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:24.204429   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:49:24.204457   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:49:24.213319   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:24.224083   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:24.234156   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:49:24.234182   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:49:24.273950   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.273980   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:49:24.295822   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.580460   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580835   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.580853   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.580864   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.581102   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.581126   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.589648   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.589714   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.589981   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.590040   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.590062   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339438   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.043578779s)
	I0410 22:49:25.339489   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339451   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115333809s)
	I0410 22:49:25.339560   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339593   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339872   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339897   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339911   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339924   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339944   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.339956   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339984   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340004   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.340015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.340149   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.340185   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340203   58701 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-519831"
	I0410 22:49:25.341481   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.341497   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.344575   58701 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
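The addon flow above is: copy each manifest into /etc/kubernetes/addons on the guest (the addons.go:426 / "scp memory" lines), then apply them in a single kubectl invocation against the in-VM kubeconfig. The sketch below reproduces that apply step with os/exec, reusing the paths from the log; those paths only exist inside the guest VM, so treat this purely as an illustration of the command shape, not something to run on the host.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.29.3/kubectl", args...)
    	// minikube points kubectl at the in-VM admin kubeconfig.
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("kubectl apply failed:", err)
    	}
    }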
	I0410 22:49:20.585629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.084898   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.585346   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.085672   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.585768   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.085613   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.585507   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.085104   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.585745   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:25.084858   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.017917   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.018591   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:27.019206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.341622   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.345974   58701 addons.go:505] duration metric: took 1.502302613s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0410 22:49:26.094458   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:25.904062   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904580   57270 main.go:141] libmachine: (no-preload-646133) Found IP for machine: 192.168.50.17
	I0410 22:49:25.904608   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has current primary IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904622   57270 main.go:141] libmachine: (no-preload-646133) Reserving static IP address...
	I0410 22:49:25.905076   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.905117   57270 main.go:141] libmachine: (no-preload-646133) DBG | skip adding static IP to network mk-no-preload-646133 - found existing host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"}
	I0410 22:49:25.905134   57270 main.go:141] libmachine: (no-preload-646133) Reserved static IP address: 192.168.50.17
	I0410 22:49:25.905151   57270 main.go:141] libmachine: (no-preload-646133) Waiting for SSH to be available...
	I0410 22:49:25.905170   57270 main.go:141] libmachine: (no-preload-646133) DBG | Getting to WaitForSSH function...
	I0410 22:49:25.907397   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907773   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.907796   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907937   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH client type: external
	I0410 22:49:25.907960   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa (-rw-------)
	I0410 22:49:25.907979   57270 main.go:141] libmachine: (no-preload-646133) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:25.907989   57270 main.go:141] libmachine: (no-preload-646133) DBG | About to run SSH command:
	I0410 22:49:25.907997   57270 main.go:141] libmachine: (no-preload-646133) DBG | exit 0
	I0410 22:49:26.032683   57270 main.go:141] libmachine: (no-preload-646133) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:26.033065   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetConfigRaw
	I0410 22:49:26.033761   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.036545   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.036951   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.036982   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.037187   57270 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/config.json ...
	I0410 22:49:26.037403   57270 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:26.037424   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:26.037655   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.039750   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040081   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.040102   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040285   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.040486   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040657   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040818   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.040972   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.041180   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.041197   57270 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:26.149298   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:26.149335   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149618   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:49:26.149647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149849   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.152432   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152799   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.152829   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.153233   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153406   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153571   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.153774   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.153992   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.154010   57270 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-646133 && echo "no-preload-646133" | sudo tee /etc/hostname
	I0410 22:49:26.283760   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-646133
	
	I0410 22:49:26.283794   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.286605   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.286925   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.286955   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.287097   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.287277   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287425   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287551   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.287725   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.287944   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.287969   57270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-646133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-646133/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-646133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:26.402869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:26.402905   57270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:26.402945   57270 buildroot.go:174] setting up certificates
	I0410 22:49:26.402956   57270 provision.go:84] configureAuth start
	I0410 22:49:26.402973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.403234   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.405718   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406079   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.406119   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.408549   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.408882   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.408917   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.409034   57270 provision.go:143] copyHostCerts
	I0410 22:49:26.409106   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:26.409124   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:26.409177   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:26.409310   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:26.409320   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:26.409341   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:26.409405   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:26.409412   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:26.409430   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:26.409476   57270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.no-preload-646133 san=[127.0.0.1 192.168.50.17 localhost minikube no-preload-646133]
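provision.go:117 regenerates the machine's server certificate with the SAN list shown (127.0.0.1, 192.168.50.17, localhost, minikube, no-preload-646133). The sketch below issues a SAN-bearing certificate with Go's crypto/x509; it is self-signed for brevity, whereas minikube signs with ca.pem/ca-key.pem as the log states.

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-646133"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SANs requested in the provision.go line above.
    		DNSNames:    []string{"localhost", "minikube", "no-preload-646133"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.17")},
    	}
    	// Self-signed here; pass the parsed CA certificate and key as parent and
    	// signer to reproduce the CA-signed server.pem that provision.go writes.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	f, err := os.Create("server.pem")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }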
	I0410 22:49:26.567556   57270 provision.go:177] copyRemoteCerts
	I0410 22:49:26.567611   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:26.567647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.570205   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570589   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.570614   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570805   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.571034   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.571172   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.571294   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:26.655943   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:26.681691   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:49:26.706573   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:26.733054   57270 provision.go:87] duration metric: took 330.073783ms to configureAuth
	I0410 22:49:26.733088   57270 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:26.733276   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:49:26.733347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.735910   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736264   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.736295   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736474   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.736648   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736798   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736925   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.737055   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.737225   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.737241   57270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:27.008174   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:27.008202   57270 machine.go:97] duration metric: took 970.785508ms to provisionDockerMachine
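A note on the `%!s(MISSING)` in the printf command a few lines up: that token is not intended sysconfig content. It is the marker Go's fmt package emits when a %s verb is formatted with too few operands, which suggests the logged command string passed through such a call; the CRIO_MINIKUBE_OPTIONS line in the output shows the file contents themselves came through intact. A short demonstration of the marker:

    package main

    import "fmt"

    func main() {
    	// One %s verb, zero arguments: fmt substitutes the error marker.
    	s := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s")
    	fmt.Println(s)
    	// Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)
    }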
	I0410 22:49:27.008216   57270 start.go:293] postStartSetup for "no-preload-646133" (driver="kvm2")
	I0410 22:49:27.008236   57270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:27.008263   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.008554   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:27.008580   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.011150   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011561   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.011604   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011900   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.012090   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.012274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.012432   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.105247   57270 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:27.109842   57270 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:27.109868   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:27.109927   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:27.109993   57270 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:27.110080   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:27.121451   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:27.151797   57270 start.go:296] duration metric: took 143.569287ms for postStartSetup
	I0410 22:49:27.151836   57270 fix.go:56] duration metric: took 19.642403615s for fixHost
	I0410 22:49:27.151865   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.154454   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154869   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.154903   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154987   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.155193   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155512   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.155660   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:27.155862   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:27.155875   57270 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:49:27.265609   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789367.209761579
	
	I0410 22:49:27.265652   57270 fix.go:216] guest clock: 1712789367.209761579
	I0410 22:49:27.265662   57270 fix.go:229] Guest: 2024-04-10 22:49:27.209761579 +0000 UTC Remote: 2024-04-10 22:49:27.151840464 +0000 UTC m=+377.371052419 (delta=57.921115ms)
	I0410 22:49:27.265687   57270 fix.go:200] guest clock delta is within tolerance: 57.921115ms
	I0410 22:49:27.265697   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 19.756293566s
	I0410 22:49:27.265724   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.265960   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:27.268735   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269184   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.269216   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269380   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270014   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270233   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270331   57270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:27.270376   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.270645   57270 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:27.270669   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.273542   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273846   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273986   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274019   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274140   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274230   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274259   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274318   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274400   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274531   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274536   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274688   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274723   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.274806   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.359922   57270 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:27.400885   57270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:27.555260   57270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:27.561275   57270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:27.561333   57270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:27.578478   57270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:27.578502   57270 start.go:494] detecting cgroup driver to use...
	I0410 22:49:27.578567   57270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:27.598020   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:27.613068   57270 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:27.613140   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:27.629253   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:27.644130   57270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:27.791801   57270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:27.952366   57270 docker.go:233] disabling docker service ...
	I0410 22:49:27.952477   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:27.968629   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:27.982330   57270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:28.117396   57270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:28.240808   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:28.257299   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:28.280918   57270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:28.280991   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.296415   57270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:28.296480   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.308602   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.319535   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.329812   57270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:28.341466   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.354706   57270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.374405   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.385094   57270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:28.394412   57270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:28.394466   57270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:28.407654   57270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:28.418381   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:28.525783   57270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:28.678643   57270 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:28.678706   57270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:28.683681   57270 start.go:562] Will wait 60s for crictl version
	I0410 22:49:28.683737   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:28.687703   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:28.725311   57270 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:49:28.725414   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.755393   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.788963   57270 out.go:177] * Preparing Kubernetes v1.30.0-rc.1 on CRI-O 1.29.1 ...
	I0410 22:49:28.790274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:28.793091   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793418   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:28.793452   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793659   57270 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:28.798916   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:28.814575   57270 kubeadm.go:877] updating cluster {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:28.814689   57270 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 22:49:28.814717   57270 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:28.852604   57270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.1". assuming images are not preloaded.
	I0410 22:49:28.852627   57270 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.1 registry.k8s.io/kube-controller-manager:v1.30.0-rc.1 registry.k8s.io/kube-scheduler:v1.30.0-rc.1 registry.k8s.io/kube-proxy:v1.30.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:49:28.852698   57270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.852707   57270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.852733   57270 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.852756   57270 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0410 22:49:28.852803   57270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.852870   57270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.852890   57270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.852917   57270 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854348   57270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.854354   57270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.854378   57270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.854419   57270 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854421   57270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.854355   57270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.854353   57270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.854740   57270 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0410 22:49:29.066608   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0410 22:49:29.072486   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.073347   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.075270   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.082649   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.085737   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.093699   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.290780   57270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" does not exist at hash "ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b" in container runtime
	I0410 22:49:29.290810   57270 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0410 22:49:29.290839   57270 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.290837   57270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.290849   57270 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0410 22:49:29.290871   57270 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290902   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304346   57270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.1" does not exist at hash "69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061" in container runtime
	I0410 22:49:29.304409   57270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.304459   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304510   57270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" does not exist at hash "bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895" in container runtime
	I0410 22:49:29.304599   57270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.304635   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304563   57270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" does not exist at hash "577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090" in container runtime
	I0410 22:49:29.304689   57270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.304738   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.311219   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.311264   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.311311   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.324663   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.324770   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.324855   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.442426   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0410 22:49:29.442541   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.458416   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0410 22:49:29.458526   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:29.468890   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.468998   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.481365   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.481482   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.498862   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498899   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0410 22:49:29.498913   57270 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498927   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498951   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1 (exists)
	I0410 22:49:29.498957   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498964   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498982   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1 (exists)
	I0410 22:49:29.499012   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498926   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0410 22:49:29.507249   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1 (exists)
	I0410 22:49:29.507282   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1 (exists)
	I0410 22:49:29.751612   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:25.585095   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.085119   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.585846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.084920   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.585251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.084926   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.585643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.084937   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.585666   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:30.085088   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.518476   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:31.518837   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:28.592323   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.098027   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.591789   58701 node_ready.go:49] node "default-k8s-diff-port-519831" has status "Ready":"True"
	I0410 22:49:31.591822   58701 node_ready.go:38] duration metric: took 7.504383585s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:31.591835   58701 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:31.599103   58701 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607758   58701 pod_ready.go:92] pod "coredns-76f75df574-ghnvx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:31.607787   58701 pod_ready.go:81] duration metric: took 8.655521ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607801   58701 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:33.690936   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.191950196s)
	I0410 22:49:33.690965   57270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.939318786s)
	I0410 22:49:33.691014   57270 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0410 22:49:33.691045   57270 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:33.690973   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0410 22:49:33.691091   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.691101   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:33.691163   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.695868   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:30.585515   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.085273   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.585347   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.585361   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.085648   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.585256   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.084938   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.585005   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:35.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.018733   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:36.019904   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:33.615785   58701 pod_ready.go:102] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:35.115811   58701 pod_ready.go:92] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:35.115846   58701 pod_ready.go:81] duration metric: took 3.508038321s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:35.115856   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123593   58701 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.123624   58701 pod_ready.go:81] duration metric: took 2.007760022s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123638   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130390   58701 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.130421   58701 pod_ready.go:81] duration metric: took 6.771239ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130436   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136219   58701 pod_ready.go:92] pod "kube-proxy-5mbwx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.136253   58701 pod_ready.go:81] duration metric: took 5.809077ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136265   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142909   58701 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.142939   58701 pod_ready.go:81] duration metric: took 6.664922ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142954   58701 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:35.767190   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1: (2.075997626s)
	I0410 22:49:35.767227   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1 from cache
	I0410 22:49:35.767261   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767278   57270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071386498s)
	I0410 22:49:35.767326   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767327   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0410 22:49:35.767497   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:35.773679   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0410 22:49:37.666289   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1: (1.898906389s)
	I0410 22:49:37.666326   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1 from cache
	I0410 22:49:37.666358   57270 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:37.666422   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:39.652778   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.986322091s)
	I0410 22:49:39.652820   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0410 22:49:39.652855   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:39.652951   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:35.585228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.085699   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.585690   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.085760   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.584867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:37.584947   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:37.625964   57719 cri.go:89] found id: ""
	I0410 22:49:37.625989   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.625996   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:37.626001   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:37.626046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:37.669151   57719 cri.go:89] found id: ""
	I0410 22:49:37.669178   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.669188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:37.669194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:37.669242   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:37.711426   57719 cri.go:89] found id: ""
	I0410 22:49:37.711456   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.711466   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:37.711474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:37.711538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:37.754678   57719 cri.go:89] found id: ""
	I0410 22:49:37.754707   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.754719   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:37.754726   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:37.754809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:37.795259   57719 cri.go:89] found id: ""
	I0410 22:49:37.795291   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.795301   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:37.795307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:37.795375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:37.836961   57719 cri.go:89] found id: ""
	I0410 22:49:37.836994   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.837004   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:37.837011   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:37.837075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:37.876195   57719 cri.go:89] found id: ""
	I0410 22:49:37.876223   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.876233   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:37.876239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:37.876290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:37.911688   57719 cri.go:89] found id: ""
	I0410 22:49:37.911715   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.911725   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:37.911736   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:37.911751   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:37.954690   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:37.954734   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:38.006731   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:38.006771   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:38.024290   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:38.024314   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:38.148504   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:38.148529   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:38.148561   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:38.519483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:40.520822   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:39.150543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:41.151300   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:42.217749   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1: (2.564772479s)
	I0410 22:49:42.217778   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1 from cache
	I0410 22:49:42.217802   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:42.217843   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:44.577826   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1: (2.359955682s)
	I0410 22:49:44.577865   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1 from cache
	I0410 22:49:44.577892   57270 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:44.577940   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:40.726314   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:40.743098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:40.743168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:40.794673   57719 cri.go:89] found id: ""
	I0410 22:49:40.794697   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.794704   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:40.794710   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:40.794756   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:40.836274   57719 cri.go:89] found id: ""
	I0410 22:49:40.836308   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.836319   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:40.836327   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:40.836408   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:40.882249   57719 cri.go:89] found id: ""
	I0410 22:49:40.882276   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.882285   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:40.882292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:40.882357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:40.925829   57719 cri.go:89] found id: ""
	I0410 22:49:40.925867   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.925878   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:40.925885   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:40.925936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:40.978494   57719 cri.go:89] found id: ""
	I0410 22:49:40.978529   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.978540   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:40.978547   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:40.978611   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:41.020935   57719 cri.go:89] found id: ""
	I0410 22:49:41.020964   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.020975   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:41.020982   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:41.021040   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:41.060779   57719 cri.go:89] found id: ""
	I0410 22:49:41.060812   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.060824   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:41.060831   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:41.060885   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:41.119604   57719 cri.go:89] found id: ""
	I0410 22:49:41.119632   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.119643   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:41.119653   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:41.119667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:41.188739   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:41.188774   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:41.203682   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:41.203735   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:41.293423   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:41.293451   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:41.293468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:41.366606   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:41.366649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:43.914447   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:43.930350   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:43.930439   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:43.968867   57719 cri.go:89] found id: ""
	I0410 22:49:43.968921   57719 logs.go:276] 0 containers: []
	W0410 22:49:43.968932   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:43.968939   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:43.969012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:44.010143   57719 cri.go:89] found id: ""
	I0410 22:49:44.010169   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.010181   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:44.010188   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:44.010264   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:44.048610   57719 cri.go:89] found id: ""
	I0410 22:49:44.048637   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.048645   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:44.048651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:44.048697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:44.105939   57719 cri.go:89] found id: ""
	I0410 22:49:44.105973   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.106001   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:44.106009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:44.106086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:44.149699   57719 cri.go:89] found id: ""
	I0410 22:49:44.149726   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.149735   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:44.149743   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:44.149803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:44.193131   57719 cri.go:89] found id: ""
	I0410 22:49:44.193159   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.193167   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:44.193173   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:44.193255   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:44.233751   57719 cri.go:89] found id: ""
	I0410 22:49:44.233781   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.233789   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:44.233801   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:44.233868   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:44.284404   57719 cri.go:89] found id: ""
	I0410 22:49:44.284432   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.284441   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:44.284449   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:44.284461   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:44.330082   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:44.330118   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:44.383452   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:44.383487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:44.399604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:44.399632   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:44.476328   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:44.476368   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:44.476415   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:43.019922   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.519954   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:43.650596   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.651668   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.537183   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0410 22:49:45.537228   57270 cache_images.go:123] Successfully loaded all cached images
	I0410 22:49:45.537235   57270 cache_images.go:92] duration metric: took 16.68459637s to LoadCachedImages
	I0410 22:49:45.537249   57270 kubeadm.go:928] updating node { 192.168.50.17 8443 v1.30.0-rc.1 crio true true} ...
	I0410 22:49:45.537401   57270 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-646133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:45.537476   57270 ssh_runner.go:195] Run: crio config
	I0410 22:49:45.587002   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:45.587031   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:45.587047   57270 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:45.587069   57270 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.17 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-646133 NodeName:no-preload-646133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:45.587205   57270 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-646133"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:45.587272   57270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.1
	I0410 22:49:45.600694   57270 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:45.600758   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:45.613884   57270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0410 22:49:45.633871   57270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0410 22:49:45.654733   57270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0410 22:49:45.673976   57270 ssh_runner.go:195] Run: grep 192.168.50.17	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:45.678260   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:45.693499   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:45.819034   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:45.838775   57270 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133 for IP: 192.168.50.17
	I0410 22:49:45.838799   57270 certs.go:194] generating shared ca certs ...
	I0410 22:49:45.838819   57270 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:45.839010   57270 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:45.839064   57270 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:45.839078   57270 certs.go:256] generating profile certs ...
	I0410 22:49:45.839175   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.key
	I0410 22:49:45.839256   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key.d257fb06
	I0410 22:49:45.839310   57270 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key
	I0410 22:49:45.839480   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:45.839521   57270 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:45.839531   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:45.839551   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:45.839608   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:45.839633   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:45.839674   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:45.840315   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:45.897688   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:45.932242   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:45.979537   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:46.020562   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0410 22:49:46.057254   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:49:46.084070   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:46.112807   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0410 22:49:46.141650   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:46.170167   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:46.196917   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:46.222645   57270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:46.242626   57270 ssh_runner.go:195] Run: openssl version
	I0410 22:49:46.249048   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:46.265110   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270018   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270083   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.276298   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:46.288165   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:46.299040   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303584   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303627   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.309278   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:46.319990   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:46.331654   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336700   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336750   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.342767   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:46.355005   57270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:46.359870   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:46.366270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:46.372625   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:46.379270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:46.386312   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:46.392796   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:49:46.399209   57270 kubeadm.go:391] StartCluster: {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:46.399318   57270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:46.399405   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.439061   57270 cri.go:89] found id: ""
	I0410 22:49:46.439149   57270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:46.450243   57270 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:46.450265   57270 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:46.450271   57270 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:46.450323   57270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:46.460553   57270 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:46.461608   57270 kubeconfig.go:125] found "no-preload-646133" server: "https://192.168.50.17:8443"
	I0410 22:49:46.464469   57270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:46.474775   57270 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.17
	I0410 22:49:46.474808   57270 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:46.474820   57270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:46.474860   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.514933   57270 cri.go:89] found id: ""
	I0410 22:49:46.515010   57270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:46.533830   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:46.547026   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:46.547042   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:46.547081   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:49:46.557093   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:46.557157   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:46.567102   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:49:46.576939   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:46.576998   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:46.586921   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.596189   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:46.596260   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.607803   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:49:46.618166   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:46.618240   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:46.628406   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:46.638748   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:46.767824   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.028868   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.261006059s)
	I0410 22:49:48.028907   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.253185   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.323164   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.404069   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:48.404153   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:48.904557   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.404477   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.437891   57270 api_server.go:72] duration metric: took 1.033818826s to wait for apiserver process to appear ...
	I0410 22:49:49.437927   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:49.437953   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:49.438623   57270 api_server.go:269] stopped: https://192.168.50.17:8443/healthz: Get "https://192.168.50.17:8443/healthz": dial tcp 192.168.50.17:8443: connect: connection refused
	I0410 22:49:47.054122   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:47.069583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:47.069654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:47.113953   57719 cri.go:89] found id: ""
	I0410 22:49:47.113981   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.113989   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:47.113995   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:47.114054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:47.156770   57719 cri.go:89] found id: ""
	I0410 22:49:47.156798   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.156808   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:47.156814   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:47.156891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:47.195227   57719 cri.go:89] found id: ""
	I0410 22:49:47.195252   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.195261   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:47.195266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:47.195328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:47.238109   57719 cri.go:89] found id: ""
	I0410 22:49:47.238138   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.238150   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:47.238157   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:47.238212   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.285062   57719 cri.go:89] found id: ""
	I0410 22:49:47.285093   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.285101   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:47.285108   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:47.285185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:47.324635   57719 cri.go:89] found id: ""
	I0410 22:49:47.324663   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.324670   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:47.324676   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:47.324744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:47.365404   57719 cri.go:89] found id: ""
	I0410 22:49:47.365437   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.365445   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:47.365468   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:47.365535   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:47.412296   57719 cri.go:89] found id: ""
	I0410 22:49:47.412335   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.412346   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:47.412367   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:47.412384   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:47.497998   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:47.498019   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:47.498033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:47.590502   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:47.590536   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:47.647665   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:47.647692   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:47.697704   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:47.697741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.213410   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:50.229408   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:50.229488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:50.268514   57719 cri.go:89] found id: ""
	I0410 22:49:50.268545   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.268556   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:50.268563   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:50.268620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:50.308733   57719 cri.go:89] found id: ""
	I0410 22:49:50.308762   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.308790   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:50.308796   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:50.308857   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:50.353929   57719 cri.go:89] found id: ""
	I0410 22:49:50.353966   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.353977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:50.353985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:50.354043   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:50.397979   57719 cri.go:89] found id: ""
	I0410 22:49:50.398009   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.398019   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:50.398026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:50.398086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.521284   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.018571   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.020874   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:48.151768   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.151820   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:49.939075   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.355813   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:52.355855   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:52.355868   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.502702   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.502733   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.502796   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.509360   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.509401   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.939056   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.946114   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.946154   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.438741   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.444154   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:53.444187   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.938848   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.947578   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:49:53.956247   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:49:53.956281   57270 api_server.go:131] duration metric: took 4.518344859s to wait for apiserver health ...
	I0410 22:49:53.956292   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:53.956301   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:53.958053   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:53.959420   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:53.973242   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:54.004623   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:54.024138   57270 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:54.024185   57270 system_pods.go:61] "coredns-7db6d8ff4d-lbcp6" [1ff36529-d718-41e7-9b61-54ba32efab0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:54.024195   57270 system_pods.go:61] "etcd-no-preload-646133" [a704a953-1418-4425-8ac1-272c632050c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:54.024214   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [90d4ff18-767c-4dbf-b4ad-ff02cb3d542f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:54.024231   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [82c0778e-690f-41a6-a57f-017ab79fd029] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:54.024243   57270 system_pods.go:61] "kube-proxy-v5fbl" [002efd18-4375-455b-9b4a-15bb739120e0] Running
	I0410 22:49:54.024252   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fa9898bc-36a6-4cc4-91e6-bba4ccd22d9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:54.024264   57270 system_pods.go:61] "metrics-server-569cc877fc-pw276" [22de5c2f-13ab-4f69-8eb6-ec4a3c3d1e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:54.024277   57270 system_pods.go:61] "storage-provisioner" [1028921e-3924-4614-bcb6-f949c18e9e4e] Running
	I0410 22:49:54.024287   57270 system_pods.go:74] duration metric: took 19.638409ms to wait for pod list to return data ...
	I0410 22:49:54.024301   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:54.031666   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:54.031694   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:54.031705   57270 node_conditions.go:105] duration metric: took 7.394201ms to run NodePressure ...
	I0410 22:49:54.031720   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:54.339352   57270 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345115   57270 kubeadm.go:733] kubelet initialised
	I0410 22:49:54.345146   57270 kubeadm.go:734] duration metric: took 5.76519ms waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345156   57270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:54.352254   57270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:50.436191   57719 cri.go:89] found id: ""
	I0410 22:49:50.436222   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.436234   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:50.436241   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:50.436316   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:50.476462   57719 cri.go:89] found id: ""
	I0410 22:49:50.476486   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.476494   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:50.476499   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:50.476557   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:50.520025   57719 cri.go:89] found id: ""
	I0410 22:49:50.520054   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.520063   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:50.520071   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:50.520127   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:50.564535   57719 cri.go:89] found id: ""
	I0410 22:49:50.564570   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.564581   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:50.564593   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:50.564624   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:50.620587   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:50.620629   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.634802   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:50.634832   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:50.707625   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:50.707655   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:50.707671   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:50.791935   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:50.791970   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:53.339109   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:53.361555   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:53.361632   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:53.428170   57719 cri.go:89] found id: ""
	I0410 22:49:53.428202   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.428212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:53.428219   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:53.428281   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:53.501929   57719 cri.go:89] found id: ""
	I0410 22:49:53.501957   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.501968   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:53.501977   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:53.502055   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:53.548844   57719 cri.go:89] found id: ""
	I0410 22:49:53.548871   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.548890   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:53.548897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:53.548949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:53.595056   57719 cri.go:89] found id: ""
	I0410 22:49:53.595081   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.595090   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:53.595098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:53.595153   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:53.638885   57719 cri.go:89] found id: ""
	I0410 22:49:53.638920   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.638938   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:53.638946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:53.639046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:53.685526   57719 cri.go:89] found id: ""
	I0410 22:49:53.685565   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.685573   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:53.685579   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:53.685650   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:53.725084   57719 cri.go:89] found id: ""
	I0410 22:49:53.725112   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.725119   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:53.725125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:53.725172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:53.767031   57719 cri.go:89] found id: ""
	I0410 22:49:53.767062   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.767072   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:53.767083   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:53.767103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:53.826570   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:53.826618   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:53.843784   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:53.843822   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:53.926277   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:53.926299   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:53.926317   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:54.024735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:54.024782   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
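The block above is one pass of minikube's log-gathering loop: it shells into the node and runs crictl against each control-plane container name, then pulls kubelet, dmesg, CRI-O, and container-status output. A minimal sketch of how to reproduce the same checks by hand, assuming the VM is still running; <profile> is a placeholder, not a name taken from this log:

	# Same checks the harness runs over SSH, reproduced manually (empty output matches the
	# "0 containers" / "No container was found" lines above)
	minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name=kube-apiserver
	minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name=etcd
	# Pull the same service logs the harness gathers
	minikube ssh -p <profile> -- "sudo journalctl -u kubelet -n 400"
	minikube ssh -p <profile> -- "sudo journalctl -u crio -n 400"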
	I0410 22:49:54.519305   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.520139   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.651382   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:55.149798   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:57.150803   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.359479   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:58.859341   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.586265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:56.602113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:56.602200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:56.647041   57719 cri.go:89] found id: ""
	I0410 22:49:56.647074   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.647086   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:56.647094   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:56.647168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:56.688053   57719 cri.go:89] found id: ""
	I0410 22:49:56.688086   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.688096   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:56.688104   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:56.688190   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:56.729176   57719 cri.go:89] found id: ""
	I0410 22:49:56.729210   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.729221   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:56.729229   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:56.729293   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:56.768877   57719 cri.go:89] found id: ""
	I0410 22:49:56.768905   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.768913   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:56.768919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:56.768966   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:56.807228   57719 cri.go:89] found id: ""
	I0410 22:49:56.807274   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.807286   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:56.807294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:56.807361   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:56.848183   57719 cri.go:89] found id: ""
	I0410 22:49:56.848216   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.848224   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:56.848230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:56.848284   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:56.887894   57719 cri.go:89] found id: ""
	I0410 22:49:56.887923   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.887931   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:56.887937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:56.887993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:56.926908   57719 cri.go:89] found id: ""
	I0410 22:49:56.926935   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.926944   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:56.926952   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:56.926968   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:57.012614   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:57.012640   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:57.012657   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:57.098735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:57.098784   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:57.140798   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:57.140831   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:57.204239   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:57.204283   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:59.720328   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:59.735964   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:59.736042   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:59.774351   57719 cri.go:89] found id: ""
	I0410 22:49:59.774383   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.774393   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:59.774407   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:59.774468   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:59.817222   57719 cri.go:89] found id: ""
	I0410 22:49:59.817248   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.817255   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:59.817260   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:59.817310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:59.854551   57719 cri.go:89] found id: ""
	I0410 22:49:59.854582   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.854594   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:59.854602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:59.854656   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:59.894334   57719 cri.go:89] found id: ""
	I0410 22:49:59.894367   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.894375   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:59.894381   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:59.894442   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:59.932446   57719 cri.go:89] found id: ""
	I0410 22:49:59.932472   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.932482   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:59.932489   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:59.932552   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:59.969168   57719 cri.go:89] found id: ""
	I0410 22:49:59.969193   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.969201   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:59.969209   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:59.969273   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:00.006918   57719 cri.go:89] found id: ""
	I0410 22:50:00.006960   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.006972   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:00.006979   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:00.007036   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:00.050380   57719 cri.go:89] found id: ""
	I0410 22:50:00.050411   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.050424   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:00.050433   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:00.050454   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:00.066340   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:00.066366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:00.146454   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:00.146479   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:00.146494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:00.231174   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:00.231225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:00.278732   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:00.278759   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:59.020938   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.518584   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:59.151137   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.650307   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.359992   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:01.360021   57270 pod_ready.go:81] duration metric: took 7.007734788s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:01.360035   57270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867322   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:02.867349   57270 pod_ready.go:81] duration metric: took 1.507305949s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867362   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.833035   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:02.847316   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:02.847380   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:02.888793   57719 cri.go:89] found id: ""
	I0410 22:50:02.888821   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.888832   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:02.888840   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:02.888897   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:02.926495   57719 cri.go:89] found id: ""
	I0410 22:50:02.926525   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.926535   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:02.926542   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:02.926603   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:02.966185   57719 cri.go:89] found id: ""
	I0410 22:50:02.966217   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.966227   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:02.966233   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:02.966295   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:03.007383   57719 cri.go:89] found id: ""
	I0410 22:50:03.007408   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.007414   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:03.007420   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:03.007490   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:03.044245   57719 cri.go:89] found id: ""
	I0410 22:50:03.044273   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.044281   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:03.044292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:03.044367   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:03.078820   57719 cri.go:89] found id: ""
	I0410 22:50:03.078849   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.078859   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:03.078866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:03.078927   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:03.117205   57719 cri.go:89] found id: ""
	I0410 22:50:03.117233   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.117244   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:03.117251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:03.117313   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:03.155698   57719 cri.go:89] found id: ""
	I0410 22:50:03.155725   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.155735   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:03.155743   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:03.155758   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:03.231685   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:03.231712   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:03.231724   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:03.315122   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:03.315167   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:03.361151   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:03.361186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:03.412134   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:03.412168   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:04.017523   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.024382   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.150291   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.151488   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.873656   57270 pod_ready.go:102] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:05.874079   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:05.874106   57270 pod_ready.go:81] duration metric: took 3.006735064s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.874116   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:07.880447   57270 pod_ready.go:102] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.881209   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.881241   57270 pod_ready.go:81] duration metric: took 3.007117254s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.881271   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887939   57270 pod_ready.go:92] pod "kube-proxy-v5fbl" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.887963   57270 pod_ready.go:81] duration metric: took 6.68304ms for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887975   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894389   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.894415   57270 pod_ready.go:81] duration metric: took 6.43215ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894428   57270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.928116   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:05.942237   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:05.942337   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:05.983813   57719 cri.go:89] found id: ""
	I0410 22:50:05.983842   57719 logs.go:276] 0 containers: []
	W0410 22:50:05.983853   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:05.983861   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:05.983945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:06.024590   57719 cri.go:89] found id: ""
	I0410 22:50:06.024618   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.024626   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:06.024637   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:06.024698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:06.063040   57719 cri.go:89] found id: ""
	I0410 22:50:06.063075   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.063087   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:06.063094   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:06.063160   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:06.102224   57719 cri.go:89] found id: ""
	I0410 22:50:06.102250   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.102259   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:06.102273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:06.102342   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:06.144202   57719 cri.go:89] found id: ""
	I0410 22:50:06.144229   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.144236   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:06.144242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:06.144288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:06.189215   57719 cri.go:89] found id: ""
	I0410 22:50:06.189243   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.189250   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:06.189256   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:06.189308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:06.225218   57719 cri.go:89] found id: ""
	I0410 22:50:06.225247   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.225258   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:06.225266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:06.225330   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:06.265229   57719 cri.go:89] found id: ""
	I0410 22:50:06.265262   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.265273   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:06.265283   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:06.265306   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:06.279794   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:06.279825   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:06.348038   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:06.348063   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:06.348079   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:06.431293   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:06.431339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:06.476033   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:06.476060   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.032099   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:09.046628   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:09.046765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:09.086900   57719 cri.go:89] found id: ""
	I0410 22:50:09.086928   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.086936   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:09.086942   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:09.086998   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:09.124989   57719 cri.go:89] found id: ""
	I0410 22:50:09.125018   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.125028   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:09.125035   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:09.125096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:09.163720   57719 cri.go:89] found id: ""
	I0410 22:50:09.163749   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.163761   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:09.163769   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:09.163822   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:09.203846   57719 cri.go:89] found id: ""
	I0410 22:50:09.203875   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.203883   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:09.203888   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:09.203945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:09.242974   57719 cri.go:89] found id: ""
	I0410 22:50:09.243002   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.243016   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:09.243024   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:09.243092   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:09.278664   57719 cri.go:89] found id: ""
	I0410 22:50:09.278687   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.278694   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:09.278700   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:09.278762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:09.313335   57719 cri.go:89] found id: ""
	I0410 22:50:09.313359   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.313367   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:09.313372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:09.313419   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:09.351160   57719 cri.go:89] found id: ""
	I0410 22:50:09.351195   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.351206   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:09.351225   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:09.351239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:09.425989   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:09.426015   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:09.426033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:09.505189   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:09.505223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:09.549619   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:09.549651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.604322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:09.604360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
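Every describe-nodes attempt in this run fails the same way: the kubectl bundled with the v1.20.0 control plane cannot reach the apiserver on localhost:8443. A hand check from inside the node, sketched under the assumption that minikube ssh still works for this profile (<profile> is a placeholder):

	# Re-run the exact command the harness uses; "connection ... refused" confirms the apiserver is down
	minikube ssh -p <profile> -- "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	# Check whether anything is listening on 8443 at all (assumes ss is present in the guest image)
	minikube ssh -p <profile> -- "sudo ss -tlnp | grep 8443 || echo nothing listening on 8443"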
	I0410 22:50:08.520115   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:11.018253   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.649190   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.650453   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.903726   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.401154   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:12.119780   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:12.135377   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:12.135458   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:12.178105   57719 cri.go:89] found id: ""
	I0410 22:50:12.178129   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.178138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:12.178144   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:12.178207   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:12.217369   57719 cri.go:89] found id: ""
	I0410 22:50:12.217397   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.217409   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:12.217424   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:12.217488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:12.254185   57719 cri.go:89] found id: ""
	I0410 22:50:12.254213   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.254222   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:12.254230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:12.254291   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:12.295007   57719 cri.go:89] found id: ""
	I0410 22:50:12.295038   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.295048   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:12.295057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:12.295125   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:12.334620   57719 cri.go:89] found id: ""
	I0410 22:50:12.334644   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.334651   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:12.334657   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:12.334707   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:12.371217   57719 cri.go:89] found id: ""
	I0410 22:50:12.371241   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.371249   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:12.371255   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:12.371302   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:12.409571   57719 cri.go:89] found id: ""
	I0410 22:50:12.409599   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.409608   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:12.409617   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:12.409675   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:12.453133   57719 cri.go:89] found id: ""
	I0410 22:50:12.453159   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.453169   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:12.453180   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:12.453194   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:12.505322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:12.505360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:12.520284   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:12.520315   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:12.608057   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:12.608082   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:12.608097   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:12.693240   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:12.693274   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:15.244628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:15.261915   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:15.262020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:15.302874   57719 cri.go:89] found id: ""
	I0410 22:50:15.302903   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.302910   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:15.302916   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:15.302973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:15.347492   57719 cri.go:89] found id: ""
	I0410 22:50:15.347518   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.347527   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:15.347534   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:15.347598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:15.394156   57719 cri.go:89] found id: ""
	I0410 22:50:15.394188   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.394198   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:15.394205   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:15.394265   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:13.518316   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.520507   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.150145   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.651083   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.401582   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:17.901179   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.437656   57719 cri.go:89] found id: ""
	I0410 22:50:15.437682   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.437690   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:15.437695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:15.437748   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:15.475658   57719 cri.go:89] found id: ""
	I0410 22:50:15.475686   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.475697   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:15.475704   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:15.475765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:15.517908   57719 cri.go:89] found id: ""
	I0410 22:50:15.517930   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.517937   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:15.517942   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:15.517991   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:15.560083   57719 cri.go:89] found id: ""
	I0410 22:50:15.560108   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.560117   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:15.560123   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:15.560178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:15.603967   57719 cri.go:89] found id: ""
	I0410 22:50:15.603994   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.604002   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:15.604013   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:15.604028   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:15.659994   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:15.660029   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:15.675627   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:15.675658   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:15.761297   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:15.761320   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:15.761339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:15.839225   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:15.839265   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.386062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:18.399609   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:18.399677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:18.443002   57719 cri.go:89] found id: ""
	I0410 22:50:18.443030   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.443040   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:18.443048   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:18.443106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:18.485089   57719 cri.go:89] found id: ""
	I0410 22:50:18.485121   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.485132   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:18.485140   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:18.485200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:18.524310   57719 cri.go:89] found id: ""
	I0410 22:50:18.524338   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.524347   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:18.524354   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:18.524412   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:18.563535   57719 cri.go:89] found id: ""
	I0410 22:50:18.563573   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.563582   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:18.563587   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:18.563634   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:18.600451   57719 cri.go:89] found id: ""
	I0410 22:50:18.600478   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.600487   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:18.600495   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:18.600562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:18.640445   57719 cri.go:89] found id: ""
	I0410 22:50:18.640472   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.640480   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:18.640485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:18.640550   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:18.677691   57719 cri.go:89] found id: ""
	I0410 22:50:18.677725   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.677746   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:18.677754   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:18.677817   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:18.716753   57719 cri.go:89] found id: ""
	I0410 22:50:18.716850   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.716876   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:18.716897   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:18.716918   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:18.804099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:18.804130   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:18.804144   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:18.883569   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:18.883611   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.930014   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:18.930045   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:18.980029   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:18.980065   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:18.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.020820   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:18.151029   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.650000   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:19.904069   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.401462   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:24.401892   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:21.495499   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:21.511001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:21.511075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:21.551469   57719 cri.go:89] found id: ""
	I0410 22:50:21.551511   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.551522   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:21.551540   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:21.551605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:21.590539   57719 cri.go:89] found id: ""
	I0410 22:50:21.590570   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.590580   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:21.590587   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:21.590654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:21.629005   57719 cri.go:89] found id: ""
	I0410 22:50:21.629030   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.629042   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:21.629048   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:21.629108   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:21.669745   57719 cri.go:89] found id: ""
	I0410 22:50:21.669767   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.669774   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:21.669780   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:21.669834   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:21.707806   57719 cri.go:89] found id: ""
	I0410 22:50:21.707831   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.707839   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:21.707844   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:21.707892   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:21.746698   57719 cri.go:89] found id: ""
	I0410 22:50:21.746727   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.746736   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:21.746742   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:21.746802   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:21.783048   57719 cri.go:89] found id: ""
	I0410 22:50:21.783070   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.783079   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:21.783084   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:21.783131   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:21.822457   57719 cri.go:89] found id: ""
	I0410 22:50:21.822484   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.822492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:21.822500   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:21.822513   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:21.894706   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:21.894747   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:21.909861   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:21.909903   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:21.999344   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:21.999370   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:21.999386   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:22.080004   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:22.080042   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:24.620924   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:24.634937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:24.634999   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:24.686619   57719 cri.go:89] found id: ""
	I0410 22:50:24.686644   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.686655   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:24.686662   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:24.686744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:24.723632   57719 cri.go:89] found id: ""
	I0410 22:50:24.723658   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.723667   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:24.723675   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:24.723738   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:24.760708   57719 cri.go:89] found id: ""
	I0410 22:50:24.760739   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.760750   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:24.760757   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:24.760804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:24.795680   57719 cri.go:89] found id: ""
	I0410 22:50:24.795712   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.795722   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:24.795729   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:24.795793   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:24.833033   57719 cri.go:89] found id: ""
	I0410 22:50:24.833063   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.833074   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:24.833082   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:24.833130   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:24.872840   57719 cri.go:89] found id: ""
	I0410 22:50:24.872864   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.872871   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:24.872877   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:24.872936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:24.915640   57719 cri.go:89] found id: ""
	I0410 22:50:24.915678   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.915688   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:24.915696   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:24.915755   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:24.957164   57719 cri.go:89] found id: ""
	I0410 22:50:24.957207   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.957219   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:24.957230   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:24.957244   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:25.006551   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:25.006601   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:25.021623   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:25.021649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:25.094699   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:25.094722   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:25.094741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:25.181280   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:25.181316   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:22.518442   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.018206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.650481   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.151162   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:26.904127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.400642   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.723475   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:27.737294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:27.737381   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:27.776098   57719 cri.go:89] found id: ""
	I0410 22:50:27.776126   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.776138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:27.776146   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:27.776203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:27.814324   57719 cri.go:89] found id: ""
	I0410 22:50:27.814352   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.814364   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:27.814371   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:27.814447   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:27.849573   57719 cri.go:89] found id: ""
	I0410 22:50:27.849603   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.849614   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:27.849621   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:27.849682   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:27.888904   57719 cri.go:89] found id: ""
	I0410 22:50:27.888932   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.888940   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:27.888946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:27.888993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:27.931772   57719 cri.go:89] found id: ""
	I0410 22:50:27.931800   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.931812   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:27.931821   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:27.931881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:27.975633   57719 cri.go:89] found id: ""
	I0410 22:50:27.975666   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.975676   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:27.975684   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:27.975736   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:28.012251   57719 cri.go:89] found id: ""
	I0410 22:50:28.012280   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.012290   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:28.012298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:28.012364   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:28.048848   57719 cri.go:89] found id: ""
	I0410 22:50:28.048886   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.048898   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:28.048908   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:28.048923   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:28.102215   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:28.102257   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:28.118052   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:28.118081   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:28.190738   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:28.190762   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:28.190777   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:28.269294   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:28.269330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:27.519211   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.521111   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.017915   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.651922   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.150852   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:31.401210   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:33.902054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.833927   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:30.848196   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:30.848266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:30.886077   57719 cri.go:89] found id: ""
	I0410 22:50:30.886117   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.886127   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:30.886133   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:30.886179   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:30.924638   57719 cri.go:89] found id: ""
	I0410 22:50:30.924668   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.924678   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:30.924686   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:30.924762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:30.961106   57719 cri.go:89] found id: ""
	I0410 22:50:30.961136   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.961147   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:30.961154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:30.961213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:31.001374   57719 cri.go:89] found id: ""
	I0410 22:50:31.001412   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.001427   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:31.001434   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:31.001498   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:31.038928   57719 cri.go:89] found id: ""
	I0410 22:50:31.038961   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.038971   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:31.038980   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:31.039057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:31.077033   57719 cri.go:89] found id: ""
	I0410 22:50:31.077067   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.077076   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:31.077083   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:31.077139   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:31.115227   57719 cri.go:89] found id: ""
	I0410 22:50:31.115257   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.115266   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:31.115273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:31.115335   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:31.157339   57719 cri.go:89] found id: ""
	I0410 22:50:31.157372   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.157382   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:31.157393   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:31.157409   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:31.198742   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:31.198770   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:31.255388   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:31.255422   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:31.272018   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:31.272048   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:31.344503   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:31.344524   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:31.344541   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:33.925749   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:33.939402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:33.939475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:33.976070   57719 cri.go:89] found id: ""
	I0410 22:50:33.976093   57719 logs.go:276] 0 containers: []
	W0410 22:50:33.976100   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:33.976106   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:33.976172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:34.013723   57719 cri.go:89] found id: ""
	I0410 22:50:34.013748   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.013758   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:34.013765   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:34.013821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:34.062678   57719 cri.go:89] found id: ""
	I0410 22:50:34.062704   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.062712   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:34.062718   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:34.062774   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:34.123007   57719 cri.go:89] found id: ""
	I0410 22:50:34.123038   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.123046   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:34.123052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:34.123096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:34.188811   57719 cri.go:89] found id: ""
	I0410 22:50:34.188841   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.188852   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:34.188859   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:34.188949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:34.223585   57719 cri.go:89] found id: ""
	I0410 22:50:34.223609   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.223618   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:34.223625   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:34.223680   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:34.260004   57719 cri.go:89] found id: ""
	I0410 22:50:34.260028   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.260036   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:34.260041   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:34.260096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:34.303064   57719 cri.go:89] found id: ""
	I0410 22:50:34.303093   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.303104   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:34.303115   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:34.303134   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:34.359105   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:34.359142   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:34.375420   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:34.375450   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:34.449619   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:34.449645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:34.449660   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:34.534214   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:34.534248   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:34.518609   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.016973   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.649917   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:34.661652   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.150648   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:36.401988   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:38.901505   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.076525   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:37.090789   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:37.090849   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:37.130848   57719 cri.go:89] found id: ""
	I0410 22:50:37.130881   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.130893   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:37.130900   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:37.130967   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:37.170158   57719 cri.go:89] found id: ""
	I0410 22:50:37.170181   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.170188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:37.170194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:37.170269   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:37.210238   57719 cri.go:89] found id: ""
	I0410 22:50:37.210264   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.210274   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:37.210282   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:37.210328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:37.256763   57719 cri.go:89] found id: ""
	I0410 22:50:37.256789   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.256800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:37.256807   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:37.256875   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:37.295323   57719 cri.go:89] found id: ""
	I0410 22:50:37.295355   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.295364   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:37.295372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:37.295443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:37.334066   57719 cri.go:89] found id: ""
	I0410 22:50:37.334094   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.334105   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:37.334113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:37.334170   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:37.374428   57719 cri.go:89] found id: ""
	I0410 22:50:37.374458   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.374477   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:37.374485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:37.374544   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:37.412114   57719 cri.go:89] found id: ""
	I0410 22:50:37.412142   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.412152   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:37.412161   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:37.412174   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:37.453693   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:37.453717   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:37.505484   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:37.505524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:37.523645   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:37.523672   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:37.595107   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:37.595134   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:37.595150   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.180649   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:40.195168   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:40.195243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:40.240130   57719 cri.go:89] found id: ""
	I0410 22:50:40.240160   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.240169   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:40.240175   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:40.240241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:40.276366   57719 cri.go:89] found id: ""
	I0410 22:50:40.276390   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.276406   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:40.276412   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:40.276466   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:40.314991   57719 cri.go:89] found id: ""
	I0410 22:50:40.315016   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.315023   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:40.315029   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:40.315075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:40.354301   57719 cri.go:89] found id: ""
	I0410 22:50:40.354331   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.354342   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:40.354349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:40.354414   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:40.393093   57719 cri.go:89] found id: ""
	I0410 22:50:40.393125   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.393135   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:40.393143   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:40.393204   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:39.021170   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:41.518285   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:39.650047   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.150206   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.902024   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.904180   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.429641   57719 cri.go:89] found id: ""
	I0410 22:50:40.429665   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.429674   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:40.429680   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:40.429727   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:40.468184   57719 cri.go:89] found id: ""
	I0410 22:50:40.468213   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.468224   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:40.468232   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:40.468304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:40.505586   57719 cri.go:89] found id: ""
	I0410 22:50:40.505616   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.505627   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:40.505637   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:40.505652   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:40.562078   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:40.562119   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:40.578135   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:40.578213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:40.659018   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:40.659047   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:40.659061   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.746434   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:40.746478   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.287852   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:43.301797   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:43.301869   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:43.339778   57719 cri.go:89] found id: ""
	I0410 22:50:43.339813   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.339822   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:43.339829   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:43.339893   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:43.378716   57719 cri.go:89] found id: ""
	I0410 22:50:43.378748   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.378759   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:43.378767   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:43.378836   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:43.417128   57719 cri.go:89] found id: ""
	I0410 22:50:43.417152   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.417163   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:43.417171   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:43.417234   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:43.459577   57719 cri.go:89] found id: ""
	I0410 22:50:43.459608   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.459617   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:43.459623   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:43.459678   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:43.497519   57719 cri.go:89] found id: ""
	I0410 22:50:43.497551   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.497561   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:43.497566   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:43.497620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:43.534400   57719 cri.go:89] found id: ""
	I0410 22:50:43.534433   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.534444   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:43.534451   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:43.534540   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:43.574213   57719 cri.go:89] found id: ""
	I0410 22:50:43.574242   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.574253   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:43.574283   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:43.574344   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:43.611078   57719 cri.go:89] found id: ""
	I0410 22:50:43.611106   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.611113   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:43.611121   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:43.611137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:43.698166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:43.698202   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.749368   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:43.749395   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:43.801584   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:43.801621   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:43.817012   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:43.817050   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:43.892325   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:43.518660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.017804   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:44.650389   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.650560   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:45.401723   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:47.901852   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.393325   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:46.407985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:46.408045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:46.442704   57719 cri.go:89] found id: ""
	I0410 22:50:46.442735   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.442745   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:46.442753   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:46.442821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:46.485582   57719 cri.go:89] found id: ""
	I0410 22:50:46.485611   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.485618   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:46.485625   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:46.485683   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:46.524199   57719 cri.go:89] found id: ""
	I0410 22:50:46.524227   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.524234   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:46.524240   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:46.524288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:46.560655   57719 cri.go:89] found id: ""
	I0410 22:50:46.560685   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.560694   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:46.560701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:46.560839   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:46.596617   57719 cri.go:89] found id: ""
	I0410 22:50:46.596646   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.596658   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:46.596666   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:46.596739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:46.634316   57719 cri.go:89] found id: ""
	I0410 22:50:46.634339   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.634347   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:46.634352   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:46.634399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:46.671466   57719 cri.go:89] found id: ""
	I0410 22:50:46.671493   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.671502   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:46.671509   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:46.671582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:46.709228   57719 cri.go:89] found id: ""
	I0410 22:50:46.709254   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.709265   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:46.709275   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:46.709291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:46.761329   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:46.761366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:46.778265   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:46.778288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:46.851092   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:46.851113   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:46.851125   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:46.929181   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:46.929223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:49.471285   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:49.485474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:49.485551   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:49.523799   57719 cri.go:89] found id: ""
	I0410 22:50:49.523826   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.523838   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:49.523846   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:49.523899   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:49.562102   57719 cri.go:89] found id: ""
	I0410 22:50:49.562129   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.562137   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:49.562143   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:49.562196   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:49.600182   57719 cri.go:89] found id: ""
	I0410 22:50:49.600204   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.600211   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:49.600216   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:49.600262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:49.640002   57719 cri.go:89] found id: ""
	I0410 22:50:49.640028   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.640039   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:49.640047   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:49.640111   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:49.678815   57719 cri.go:89] found id: ""
	I0410 22:50:49.678847   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.678858   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:49.678866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:49.678929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:49.716933   57719 cri.go:89] found id: ""
	I0410 22:50:49.716959   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.716969   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:49.716976   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:49.717039   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:49.756018   57719 cri.go:89] found id: ""
	I0410 22:50:49.756050   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.756060   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:49.756068   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:49.756132   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:49.802066   57719 cri.go:89] found id: ""
	I0410 22:50:49.802094   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.802103   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:49.802110   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:49.802123   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:49.856363   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:49.856417   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:49.872297   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:49.872330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:49.950152   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:49.950174   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:49.950185   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:50.031251   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:50.031291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:48.517547   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.517942   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:49.150498   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:51.151491   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.401650   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.401866   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.574794   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:52.589052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:52.589117   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:52.625911   57719 cri.go:89] found id: ""
	I0410 22:50:52.625941   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.625952   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:52.625960   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:52.626020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:52.668749   57719 cri.go:89] found id: ""
	I0410 22:50:52.668773   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.668781   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:52.668787   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:52.668835   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:52.713420   57719 cri.go:89] found id: ""
	I0410 22:50:52.713447   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.713457   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:52.713473   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:52.713538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:52.750265   57719 cri.go:89] found id: ""
	I0410 22:50:52.750294   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.750301   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:52.750307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:52.750354   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:52.787552   57719 cri.go:89] found id: ""
	I0410 22:50:52.787586   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.787597   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:52.787604   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:52.787670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:52.827988   57719 cri.go:89] found id: ""
	I0410 22:50:52.828013   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.828020   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:52.828026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:52.828072   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:52.864115   57719 cri.go:89] found id: ""
	I0410 22:50:52.864144   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.864155   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:52.864161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:52.864222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:52.906673   57719 cri.go:89] found id: ""
	I0410 22:50:52.906702   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.906712   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:52.906723   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:52.906742   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:52.960842   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:52.960892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:52.976084   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:52.976114   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:53.052612   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:53.052638   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:53.052656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:53.132465   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:53.132518   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:53.018789   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.518169   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:53.154117   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.653267   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:54.903797   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:57.401445   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.676947   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:55.691098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:55.691183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:55.728711   57719 cri.go:89] found id: ""
	I0410 22:50:55.728740   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.728750   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:55.728758   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:55.728824   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:55.768540   57719 cri.go:89] found id: ""
	I0410 22:50:55.768568   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.768578   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:55.768584   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:55.768649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:55.806901   57719 cri.go:89] found id: ""
	I0410 22:50:55.806928   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.806938   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:55.806945   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:55.807019   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:55.846777   57719 cri.go:89] found id: ""
	I0410 22:50:55.846807   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.846816   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:55.846822   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:55.846873   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:55.887143   57719 cri.go:89] found id: ""
	I0410 22:50:55.887172   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.887181   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:55.887186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:55.887241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:55.929008   57719 cri.go:89] found id: ""
	I0410 22:50:55.929032   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.929040   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:55.929046   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:55.929098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:55.969496   57719 cri.go:89] found id: ""
	I0410 22:50:55.969526   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.969536   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:55.969544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:55.969605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:56.007786   57719 cri.go:89] found id: ""
	I0410 22:50:56.007818   57719 logs.go:276] 0 containers: []
	W0410 22:50:56.007828   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:56.007838   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:56.007854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:56.061616   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:56.061653   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:56.078664   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:56.078689   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:56.165015   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:56.165037   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:56.165053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:56.241928   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:56.241971   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:58.785955   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:58.799544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:58.799604   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:58.837234   57719 cri.go:89] found id: ""
	I0410 22:50:58.837264   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.837275   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:58.837283   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:58.837350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:58.877818   57719 cri.go:89] found id: ""
	I0410 22:50:58.877854   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.877861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:58.877867   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:58.877921   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:58.919705   57719 cri.go:89] found id: ""
	I0410 22:50:58.919729   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.919740   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:58.919747   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:58.919809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:58.957995   57719 cri.go:89] found id: ""
	I0410 22:50:58.958020   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.958029   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:58.958036   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:58.958091   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:58.999966   57719 cri.go:89] found id: ""
	I0410 22:50:58.999995   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.000008   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:59.000016   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:59.000088   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:59.040516   57719 cri.go:89] found id: ""
	I0410 22:50:59.040541   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.040552   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:59.040560   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:59.040623   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:59.078869   57719 cri.go:89] found id: ""
	I0410 22:50:59.078899   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.078908   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:59.078913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:59.078961   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:59.116637   57719 cri.go:89] found id: ""
	I0410 22:50:59.116663   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.116670   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:59.116679   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:59.116697   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:59.195852   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:59.195892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:59.243256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:59.243282   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:59.299195   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:59.299263   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:59.314512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:59.314537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:59.386468   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:58.016995   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.018205   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:58.151543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.650140   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:59.901858   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.902933   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:04.402128   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.886907   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:01.905169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:01.905251   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:01.944154   57719 cri.go:89] found id: ""
	I0410 22:51:01.944187   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.944198   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:01.944205   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:01.944268   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:01.982743   57719 cri.go:89] found id: ""
	I0410 22:51:01.982778   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.982789   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:01.982797   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:01.982864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:02.020072   57719 cri.go:89] found id: ""
	I0410 22:51:02.020094   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.020102   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:02.020159   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:02.020213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:02.064250   57719 cri.go:89] found id: ""
	I0410 22:51:02.064273   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.064280   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:02.064286   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:02.064339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:02.105013   57719 cri.go:89] found id: ""
	I0410 22:51:02.105045   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.105054   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:02.105060   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:02.105106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:02.145664   57719 cri.go:89] found id: ""
	I0410 22:51:02.145689   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.145695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:02.145701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:02.145759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:02.189752   57719 cri.go:89] found id: ""
	I0410 22:51:02.189831   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.189850   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:02.189857   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:02.189929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:02.228315   57719 cri.go:89] found id: ""
	I0410 22:51:02.228347   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.228358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:02.228374   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:02.228390   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:02.281425   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:02.281460   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:02.296003   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:02.296031   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:02.389572   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:02.389599   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:02.389613   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.475881   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:02.475916   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.022037   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:05.037242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:05.037304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:05.073656   57719 cri.go:89] found id: ""
	I0410 22:51:05.073687   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.073698   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:05.073705   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:05.073767   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:05.114321   57719 cri.go:89] found id: ""
	I0410 22:51:05.114348   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.114356   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:05.114361   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:05.114430   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:05.153119   57719 cri.go:89] found id: ""
	I0410 22:51:05.153156   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.153164   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:05.153170   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:05.153230   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:05.193393   57719 cri.go:89] found id: ""
	I0410 22:51:05.193420   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.193428   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:05.193433   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:05.193479   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:05.229826   57719 cri.go:89] found id: ""
	I0410 22:51:05.229853   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.229861   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:05.229867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:05.229915   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:05.265511   57719 cri.go:89] found id: ""
	I0410 22:51:05.265544   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.265555   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:05.265562   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:05.265627   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:05.302257   57719 cri.go:89] found id: ""
	I0410 22:51:05.302287   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.302297   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:05.302305   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:05.302386   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:05.347344   57719 cri.go:89] found id: ""
	I0410 22:51:05.347372   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.347380   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:05.347388   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:05.347399   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:05.421796   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:05.421817   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:05.421829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.521499   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.017660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.017945   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:02.651104   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.150286   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.150565   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:06.402266   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:08.406456   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.501803   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:05.501839   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.549161   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:05.549195   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:05.599598   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:05.599633   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.115679   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:08.130273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:08.130350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:08.172302   57719 cri.go:89] found id: ""
	I0410 22:51:08.172328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.172335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:08.172342   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:08.172390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:08.220789   57719 cri.go:89] found id: ""
	I0410 22:51:08.220812   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.220819   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:08.220825   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:08.220874   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:08.258299   57719 cri.go:89] found id: ""
	I0410 22:51:08.258328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.258341   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:08.258349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:08.258404   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:08.297698   57719 cri.go:89] found id: ""
	I0410 22:51:08.297726   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.297733   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:08.297739   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:08.297787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:08.335564   57719 cri.go:89] found id: ""
	I0410 22:51:08.335595   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.335605   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:08.335613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:08.335671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:08.373340   57719 cri.go:89] found id: ""
	I0410 22:51:08.373367   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.373377   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:08.373384   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:08.373481   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:08.413961   57719 cri.go:89] found id: ""
	I0410 22:51:08.413984   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.413993   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:08.414001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:08.414062   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:08.459449   57719 cri.go:89] found id: ""
	I0410 22:51:08.459481   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.459492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:08.459505   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:08.459521   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:08.518061   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:08.518103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.533653   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:08.533680   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:08.619882   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:08.619917   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:08.619932   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:08.696329   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:08.696364   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:09.518298   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.518877   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:09.650387   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.650614   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:10.902634   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:13.402009   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.256846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:11.271521   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:11.271582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:11.312829   57719 cri.go:89] found id: ""
	I0410 22:51:11.312851   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.312869   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:11.312876   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:11.312930   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:11.355183   57719 cri.go:89] found id: ""
	I0410 22:51:11.355210   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.355220   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:11.355227   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:11.355287   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:11.394345   57719 cri.go:89] found id: ""
	I0410 22:51:11.394376   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.394388   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:11.394396   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:11.394460   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:11.434128   57719 cri.go:89] found id: ""
	I0410 22:51:11.434155   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.434163   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:11.434169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:11.434219   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:11.473160   57719 cri.go:89] found id: ""
	I0410 22:51:11.473189   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.473201   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:11.473208   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:11.473278   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:11.513782   57719 cri.go:89] found id: ""
	I0410 22:51:11.513815   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.513826   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:11.513835   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:11.513891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:11.556057   57719 cri.go:89] found id: ""
	I0410 22:51:11.556085   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.556093   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:11.556100   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:11.556147   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:11.594557   57719 cri.go:89] found id: ""
	I0410 22:51:11.594579   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.594586   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:11.594594   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:11.594609   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:11.672795   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:11.672841   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:11.716011   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:11.716046   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:11.769372   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:11.769413   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:11.784589   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:11.784617   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:11.857051   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
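
The cycle that ends here repeats the same diagnostics on every retry: per-component "crictl ps" listings, journalctl output for kubelet and CRI-O, filtered dmesg, and "kubectl describe nodes" against the node's kubeconfig, which keeps failing with "connection to the server localhost:8443 was refused" because no kube-apiserver container is running. The following is a minimal Go sketch of that sequence for anyone reproducing the diagnosis by hand; it is an illustration only, not minikube's logs.go implementation, and the commands and paths are copied verbatim from the log on the assumption that they are run on the affected node.

// diagnose.go: hypothetical sketch that re-runs the diagnostic commands
// visible in the log above. Not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output; failures are
// reported and skipped, mirroring the W-level "failed ..." log entries.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		fmt.Printf("(command failed: %v)\n", err)
	}
}

func main() {
	// Per-component container listings, as in "crictl ps -a --quiet --name=<component>".
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		run("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c)
	}
	// Unit logs and kernel messages, as in the "Gathering logs for ..." steps.
	run("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
	run("/bin/bash", "-c", "sudo journalctl -u crio -n 400")
	run("/bin/bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	// Node description via the node's kubeconfig; with no apiserver container running
	// this fails with "The connection to the server localhost:8443 was refused".
	run("/bin/bash", "-c", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
}
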
	I0410 22:51:14.358019   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:14.372116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:14.372192   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:14.412020   57719 cri.go:89] found id: ""
	I0410 22:51:14.412049   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.412061   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:14.412068   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:14.412128   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:14.450317   57719 cri.go:89] found id: ""
	I0410 22:51:14.450349   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.450360   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:14.450368   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:14.450426   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:14.509080   57719 cri.go:89] found id: ""
	I0410 22:51:14.509104   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.509110   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:14.509116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:14.509185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:14.561540   57719 cri.go:89] found id: ""
	I0410 22:51:14.561572   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.561583   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:14.561590   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:14.561670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:14.622498   57719 cri.go:89] found id: ""
	I0410 22:51:14.622528   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.622538   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:14.622546   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:14.622606   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:14.678451   57719 cri.go:89] found id: ""
	I0410 22:51:14.678481   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.678490   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:14.678498   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:14.678560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:14.720264   57719 cri.go:89] found id: ""
	I0410 22:51:14.720302   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.720315   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:14.720323   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:14.720388   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:14.758039   57719 cri.go:89] found id: ""
	I0410 22:51:14.758063   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.758071   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:14.758079   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:14.758090   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:14.808111   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:14.808171   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:14.825444   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:14.825487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:14.906859   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.906884   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:14.906899   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:14.995176   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:14.995225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:14.017397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.017624   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:14.149898   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.150320   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:15.901542   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.902391   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.541159   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:17.556679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:17.556749   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:17.595839   57719 cri.go:89] found id: ""
	I0410 22:51:17.595869   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.595880   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:17.595895   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:17.595954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:17.633921   57719 cri.go:89] found id: ""
	I0410 22:51:17.633947   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.633957   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:17.633964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:17.634033   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:17.673467   57719 cri.go:89] found id: ""
	I0410 22:51:17.673493   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.673501   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:17.673507   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:17.673554   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:17.709631   57719 cri.go:89] found id: ""
	I0410 22:51:17.709660   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.709670   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:17.709679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:17.709739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:17.760852   57719 cri.go:89] found id: ""
	I0410 22:51:17.760880   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.760893   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:17.760908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:17.760969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:17.798074   57719 cri.go:89] found id: ""
	I0410 22:51:17.798099   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.798108   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:17.798117   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:17.798178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:17.835807   57719 cri.go:89] found id: ""
	I0410 22:51:17.835839   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.835854   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:17.835863   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:17.835935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:17.876812   57719 cri.go:89] found id: ""
	I0410 22:51:17.876846   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.876856   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:17.876868   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:17.876882   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:17.891121   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:17.891149   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:17.966241   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:17.966264   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:17.966277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:18.042633   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:18.042667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:18.088294   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:18.088327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:18.518103   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.519397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:18.650784   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:21.150770   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.403127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:22.901329   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.647016   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:20.662573   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:20.662640   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:20.701147   57719 cri.go:89] found id: ""
	I0410 22:51:20.701173   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.701184   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:20.701191   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:20.701252   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:20.739005   57719 cri.go:89] found id: ""
	I0410 22:51:20.739038   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.739049   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:20.739057   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:20.739112   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:20.776335   57719 cri.go:89] found id: ""
	I0410 22:51:20.776365   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.776379   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:20.776386   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:20.776471   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:20.814755   57719 cri.go:89] found id: ""
	I0410 22:51:20.814789   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.814800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:20.814808   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:20.814867   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:20.853872   57719 cri.go:89] found id: ""
	I0410 22:51:20.853897   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.853904   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:20.853910   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:20.853958   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:20.891616   57719 cri.go:89] found id: ""
	I0410 22:51:20.891648   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.891656   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:20.891662   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:20.891710   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:20.930285   57719 cri.go:89] found id: ""
	I0410 22:51:20.930316   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.930326   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:20.930341   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:20.930398   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:20.967857   57719 cri.go:89] found id: ""
	I0410 22:51:20.967894   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.967904   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:20.967913   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:20.967934   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:21.053166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:21.053201   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:21.098860   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:21.098888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:21.150395   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:21.150430   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:21.164707   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:21.164737   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:21.251010   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:23.751441   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:23.769949   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:23.770014   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:23.809652   57719 cri.go:89] found id: ""
	I0410 22:51:23.809678   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.809686   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:23.809692   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:23.809740   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:23.847331   57719 cri.go:89] found id: ""
	I0410 22:51:23.847364   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.847374   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:23.847383   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:23.847445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:23.889459   57719 cri.go:89] found id: ""
	I0410 22:51:23.889488   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.889498   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:23.889505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:23.889564   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:23.932683   57719 cri.go:89] found id: ""
	I0410 22:51:23.932712   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.932720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:23.932727   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:23.932787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:23.974161   57719 cri.go:89] found id: ""
	I0410 22:51:23.974187   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.974194   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:23.974200   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:23.974253   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:24.013058   57719 cri.go:89] found id: ""
	I0410 22:51:24.013087   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.013098   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:24.013106   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:24.013169   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:24.052556   57719 cri.go:89] found id: ""
	I0410 22:51:24.052582   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.052590   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:24.052596   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:24.052643   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:24.089940   57719 cri.go:89] found id: ""
	I0410 22:51:24.089967   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.089974   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:24.089982   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:24.089992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:24.133198   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:24.133226   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:24.186615   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:24.186651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:24.200559   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:24.200586   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:24.277061   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:24.277093   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:24.277109   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:23.016887   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:25.018325   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:27.018514   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:23.650669   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.149198   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:24.901704   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.902227   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.902337   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.855354   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:26.870269   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:26.870329   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:26.910056   57719 cri.go:89] found id: ""
	I0410 22:51:26.910084   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.910094   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:26.910101   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:26.910163   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:26.949646   57719 cri.go:89] found id: ""
	I0410 22:51:26.949674   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.949684   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:26.949690   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:26.949759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:26.990945   57719 cri.go:89] found id: ""
	I0410 22:51:26.990970   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.990977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:26.990984   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:26.991053   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:27.029464   57719 cri.go:89] found id: ""
	I0410 22:51:27.029491   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.029500   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:27.029505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:27.029562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:27.072194   57719 cri.go:89] found id: ""
	I0410 22:51:27.072235   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.072260   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:27.072270   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:27.072339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:27.106942   57719 cri.go:89] found id: ""
	I0410 22:51:27.106969   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.106979   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:27.106985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:27.107045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:27.144851   57719 cri.go:89] found id: ""
	I0410 22:51:27.144885   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.144894   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:27.144909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:27.144970   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:27.188138   57719 cri.go:89] found id: ""
	I0410 22:51:27.188166   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.188178   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:27.188189   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:27.188204   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:27.241911   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:27.241943   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:27.255296   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:27.255322   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:27.327638   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:27.327663   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:27.327678   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:27.409048   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:27.409083   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:29.960093   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:29.975583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:29.975647   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:30.018120   57719 cri.go:89] found id: ""
	I0410 22:51:30.018149   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.018159   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:30.018166   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:30.018225   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:30.055487   57719 cri.go:89] found id: ""
	I0410 22:51:30.055511   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.055518   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:30.055524   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:30.055573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:30.093723   57719 cri.go:89] found id: ""
	I0410 22:51:30.093749   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.093756   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:30.093761   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:30.093808   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:30.138278   57719 cri.go:89] found id: ""
	I0410 22:51:30.138306   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.138317   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:30.138324   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:30.138385   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:30.174454   57719 cri.go:89] found id: ""
	I0410 22:51:30.174484   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.174495   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:30.174502   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:30.174573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:30.213189   57719 cri.go:89] found id: ""
	I0410 22:51:30.213214   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.213221   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:30.213227   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:30.213272   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:30.253264   57719 cri.go:89] found id: ""
	I0410 22:51:30.253294   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.253304   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:30.253309   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:30.253357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:30.289729   57719 cri.go:89] found id: ""
	I0410 22:51:30.289755   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.289767   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:30.289777   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:30.289793   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:30.303387   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:30.303416   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:30.381294   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:30.381315   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:30.381331   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:29.019226   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:31.519681   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.150621   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.649807   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.903662   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:33.401827   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.468072   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:30.468110   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:30.508761   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:30.508794   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.061654   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:33.077072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:33.077146   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:33.113753   57719 cri.go:89] found id: ""
	I0410 22:51:33.113781   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.113791   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:33.113798   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:33.113848   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:33.149212   57719 cri.go:89] found id: ""
	I0410 22:51:33.149238   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.149249   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:33.149256   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:33.149321   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:33.185619   57719 cri.go:89] found id: ""
	I0410 22:51:33.185649   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.185659   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:33.185667   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:33.185725   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:33.222270   57719 cri.go:89] found id: ""
	I0410 22:51:33.222301   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.222313   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:33.222320   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:33.222375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:33.258594   57719 cri.go:89] found id: ""
	I0410 22:51:33.258624   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.258636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:33.258642   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:33.258689   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:33.298326   57719 cri.go:89] found id: ""
	I0410 22:51:33.298360   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.298368   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:33.298374   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:33.298438   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:33.337407   57719 cri.go:89] found id: ""
	I0410 22:51:33.337438   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.337449   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:33.337456   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:33.337520   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:33.374971   57719 cri.go:89] found id: ""
	I0410 22:51:33.375003   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.375014   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:33.375024   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:33.375039   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:33.415256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:33.415288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.467895   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:33.467929   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:33.484604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:33.484639   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:33.562267   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:33.562288   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:33.562299   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:34.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.519093   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:32.650396   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.150200   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.902810   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:38.401463   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.142628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:36.157825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:36.157883   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:36.199418   57719 cri.go:89] found id: ""
	I0410 22:51:36.199446   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.199456   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:36.199463   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:36.199523   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:36.238136   57719 cri.go:89] found id: ""
	I0410 22:51:36.238166   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.238174   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:36.238180   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:36.238229   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:36.273995   57719 cri.go:89] found id: ""
	I0410 22:51:36.274026   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.274037   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:36.274049   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:36.274110   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:36.311007   57719 cri.go:89] found id: ""
	I0410 22:51:36.311039   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.311049   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:36.311057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:36.311122   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:36.351062   57719 cri.go:89] found id: ""
	I0410 22:51:36.351086   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.351093   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:36.351099   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:36.351152   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:36.388660   57719 cri.go:89] found id: ""
	I0410 22:51:36.388689   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.388703   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:36.388711   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:36.388762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:36.428715   57719 cri.go:89] found id: ""
	I0410 22:51:36.428753   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.428761   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:36.428767   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:36.428831   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:36.467186   57719 cri.go:89] found id: ""
	I0410 22:51:36.467213   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.467220   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:36.467228   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:36.467239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:36.521831   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:36.521860   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:36.536929   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:36.536957   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:36.614624   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:36.614647   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:36.614659   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:36.694604   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:36.694646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.240039   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:39.255177   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:39.255262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:39.293063   57719 cri.go:89] found id: ""
	I0410 22:51:39.293091   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.293113   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:39.293120   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:39.293181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:39.331603   57719 cri.go:89] found id: ""
	I0410 22:51:39.331631   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.331639   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:39.331645   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:39.331697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:39.372881   57719 cri.go:89] found id: ""
	I0410 22:51:39.372908   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.372919   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:39.372926   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:39.372987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:39.417399   57719 cri.go:89] found id: ""
	I0410 22:51:39.417425   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.417435   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:39.417442   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:39.417503   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:39.458836   57719 cri.go:89] found id: ""
	I0410 22:51:39.458868   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.458877   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:39.458882   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:39.458932   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:39.496436   57719 cri.go:89] found id: ""
	I0410 22:51:39.496460   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.496467   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:39.496474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:39.496532   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:39.534649   57719 cri.go:89] found id: ""
	I0410 22:51:39.534681   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.534690   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:39.534695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:39.534754   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:39.571677   57719 cri.go:89] found id: ""
	I0410 22:51:39.571698   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.571705   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:39.571714   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:39.571725   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.621445   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:39.621482   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:39.676341   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:39.676382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:39.691543   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:39.691573   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:39.769452   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:39.769477   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:39.769493   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:39.017483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:41.020027   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:37.651534   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.151404   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.401635   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.401931   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.401972   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.350823   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:42.367124   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:42.367199   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:42.407511   57719 cri.go:89] found id: ""
	I0410 22:51:42.407545   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.407554   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:42.407560   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:42.407622   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:42.442913   57719 cri.go:89] found id: ""
	I0410 22:51:42.442948   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.442958   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:42.442964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:42.443027   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:42.480747   57719 cri.go:89] found id: ""
	I0410 22:51:42.480777   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.480786   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:42.480792   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:42.480846   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:42.521610   57719 cri.go:89] found id: ""
	I0410 22:51:42.521635   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.521644   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:42.521651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:42.521698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:42.561076   57719 cri.go:89] found id: ""
	I0410 22:51:42.561108   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.561119   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:42.561127   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:42.561189   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:42.598034   57719 cri.go:89] found id: ""
	I0410 22:51:42.598059   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.598066   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:42.598072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:42.598129   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:42.637051   57719 cri.go:89] found id: ""
	I0410 22:51:42.637085   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.637095   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:42.637103   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:42.637162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:42.676051   57719 cri.go:89] found id: ""
	I0410 22:51:42.676084   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.676094   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:42.676105   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:42.676120   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:42.719607   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:42.719634   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:42.770791   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:42.770829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:42.785704   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:42.785730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:42.876445   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:42.876475   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:42.876490   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:43.518453   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.019450   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.650486   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.650894   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:47.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.901358   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:48.902417   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:45.458721   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:45.474125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:45.474203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:45.511105   57719 cri.go:89] found id: ""
	I0410 22:51:45.511143   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.511153   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:45.511161   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:45.511220   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:45.552891   57719 cri.go:89] found id: ""
	I0410 22:51:45.552916   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.552924   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:45.552930   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:45.552986   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:45.592423   57719 cri.go:89] found id: ""
	I0410 22:51:45.592458   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.592474   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:45.592481   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:45.592542   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:45.630964   57719 cri.go:89] found id: ""
	I0410 22:51:45.631009   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.631026   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:45.631033   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:45.631098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:45.669557   57719 cri.go:89] found id: ""
	I0410 22:51:45.669586   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.669595   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:45.669602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:45.669702   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:45.706359   57719 cri.go:89] found id: ""
	I0410 22:51:45.706387   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.706395   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:45.706402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:45.706463   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:45.743301   57719 cri.go:89] found id: ""
	I0410 22:51:45.743330   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.743337   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:45.743343   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:45.743390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:45.781679   57719 cri.go:89] found id: ""
	I0410 22:51:45.781703   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.781711   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:45.781718   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:45.781730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:45.835251   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:45.835286   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:45.849255   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:45.849284   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:45.918404   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:45.918436   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:45.918452   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:45.999556   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:45.999591   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.546421   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:48.561243   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:48.561314   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:48.618335   57719 cri.go:89] found id: ""
	I0410 22:51:48.618361   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.618369   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:48.618375   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:48.618445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:48.656116   57719 cri.go:89] found id: ""
	I0410 22:51:48.656151   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.656160   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:48.656167   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:48.656222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:48.694846   57719 cri.go:89] found id: ""
	I0410 22:51:48.694874   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.694884   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:48.694897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:48.694971   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:48.731988   57719 cri.go:89] found id: ""
	I0410 22:51:48.732020   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.732031   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:48.732039   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:48.732102   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:48.768595   57719 cri.go:89] found id: ""
	I0410 22:51:48.768627   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.768636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:48.768643   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:48.768708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:48.807263   57719 cri.go:89] found id: ""
	I0410 22:51:48.807292   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.807302   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:48.807308   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:48.807366   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:48.845291   57719 cri.go:89] found id: ""
	I0410 22:51:48.845317   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.845325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:48.845329   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:48.845399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:48.891056   57719 cri.go:89] found id: ""
	I0410 22:51:48.891081   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.891091   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:48.891102   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:48.891117   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.931963   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:48.931992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:48.985539   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:48.985579   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:49.000685   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:49.000716   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:49.076097   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:49.076127   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:49.076143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:48.517879   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.018479   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:49.150511   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.650519   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.400971   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:53.401596   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.663336   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:51.678249   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:51.678315   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:51.720062   57719 cri.go:89] found id: ""
	I0410 22:51:51.720088   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.720096   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:51.720103   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:51.720164   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:51.766351   57719 cri.go:89] found id: ""
	I0410 22:51:51.766387   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.766395   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:51.766401   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:51.766448   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:51.813037   57719 cri.go:89] found id: ""
	I0410 22:51:51.813068   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.813080   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:51.813087   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:51.813150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:51.849232   57719 cri.go:89] found id: ""
	I0410 22:51:51.849262   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.849273   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:51.849280   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:51.849346   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:51.886392   57719 cri.go:89] found id: ""
	I0410 22:51:51.886415   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.886422   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:51.886428   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:51.886485   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:51.930859   57719 cri.go:89] found id: ""
	I0410 22:51:51.930896   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.930905   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:51.930913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:51.930978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:51.970403   57719 cri.go:89] found id: ""
	I0410 22:51:51.970501   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.970524   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:51.970533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:51.970599   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:52.008281   57719 cri.go:89] found id: ""
	I0410 22:51:52.008311   57719 logs.go:276] 0 containers: []
	W0410 22:51:52.008322   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:52.008333   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:52.008347   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:52.060623   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:52.060656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:52.075529   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:52.075559   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:52.158330   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:52.158356   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:52.158371   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:52.236356   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:52.236392   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:54.782448   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:54.796928   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:54.796997   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:54.836297   57719 cri.go:89] found id: ""
	I0410 22:51:54.836326   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.836335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:54.836341   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:54.836390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:54.873501   57719 cri.go:89] found id: ""
	I0410 22:51:54.873532   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.873540   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:54.873547   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:54.873617   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:54.914200   57719 cri.go:89] found id: ""
	I0410 22:51:54.914227   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.914238   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:54.914247   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:54.914308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:54.958654   57719 cri.go:89] found id: ""
	I0410 22:51:54.958682   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.958693   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:54.958702   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:54.958761   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:55.017032   57719 cri.go:89] found id: ""
	I0410 22:51:55.017078   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.017090   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:55.017101   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:55.017167   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:55.093024   57719 cri.go:89] found id: ""
	I0410 22:51:55.093059   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.093070   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:55.093085   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:55.093156   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:55.142412   57719 cri.go:89] found id: ""
	I0410 22:51:55.142441   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.142456   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:55.142464   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:55.142521   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:55.180116   57719 cri.go:89] found id: ""
	I0410 22:51:55.180147   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.180159   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:55.180169   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:55.180186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:55.249118   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:55.249139   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:55.249153   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:55.327558   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:55.327597   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:55.373127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:55.373163   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:53.518589   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.017080   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:54.151372   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.650238   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.401716   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:57.902174   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.431602   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:55.431647   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:57.947559   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:57.962916   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:57.962983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:58.000955   57719 cri.go:89] found id: ""
	I0410 22:51:58.000983   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.000990   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:58.000997   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:58.001049   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:58.040556   57719 cri.go:89] found id: ""
	I0410 22:51:58.040579   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.040586   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:58.040592   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:58.040649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:58.079121   57719 cri.go:89] found id: ""
	I0410 22:51:58.079148   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.079155   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:58.079161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:58.079240   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:58.119876   57719 cri.go:89] found id: ""
	I0410 22:51:58.119902   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.119914   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:58.119929   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:58.119987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:58.160130   57719 cri.go:89] found id: ""
	I0410 22:51:58.160162   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.160173   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:58.160181   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:58.160258   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:58.198162   57719 cri.go:89] found id: ""
	I0410 22:51:58.198195   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.198207   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:58.198215   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:58.198266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:58.235049   57719 cri.go:89] found id: ""
	I0410 22:51:58.235078   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.235089   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:58.235096   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:58.235157   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:58.275786   57719 cri.go:89] found id: ""
	I0410 22:51:58.275825   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.275845   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:58.275856   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:58.275872   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:58.316246   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:58.316277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:58.371614   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:58.371649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:58.386610   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:58.386646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:58.465167   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:58.465187   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:58.465199   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:58.018362   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.517710   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:59.152119   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.650566   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.401148   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:02.401494   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.401624   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.049405   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:01.073251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:01.073328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:01.125169   57719 cri.go:89] found id: ""
	I0410 22:52:01.125201   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.125212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:01.125220   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:01.125289   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:01.171256   57719 cri.go:89] found id: ""
	I0410 22:52:01.171289   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.171300   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:01.171308   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:01.171376   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:01.210444   57719 cri.go:89] found id: ""
	I0410 22:52:01.210478   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.210489   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:01.210503   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:01.210568   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:01.252448   57719 cri.go:89] found id: ""
	I0410 22:52:01.252473   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.252480   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:01.252486   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:01.252531   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:01.293084   57719 cri.go:89] found id: ""
	I0410 22:52:01.293117   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.293128   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:01.293136   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:01.293208   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:01.330992   57719 cri.go:89] found id: ""
	I0410 22:52:01.331019   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.331026   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:01.331032   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:01.331081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:01.369286   57719 cri.go:89] found id: ""
	I0410 22:52:01.369315   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.369325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:01.369331   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:01.369378   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:01.409888   57719 cri.go:89] found id: ""
	I0410 22:52:01.409916   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.409924   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:01.409933   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:01.409944   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:01.484535   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:01.484557   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:01.484569   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:01.565727   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:01.565778   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:01.606987   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:01.607018   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:01.659492   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:01.659529   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.174971   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:04.190302   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:04.190382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:04.230050   57719 cri.go:89] found id: ""
	I0410 22:52:04.230080   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.230090   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:04.230097   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:04.230162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:04.269870   57719 cri.go:89] found id: ""
	I0410 22:52:04.269902   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.269908   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:04.269914   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:04.269969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:04.310977   57719 cri.go:89] found id: ""
	I0410 22:52:04.311008   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.311019   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:04.311026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:04.311096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:04.349108   57719 cri.go:89] found id: ""
	I0410 22:52:04.349136   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.349147   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:04.349154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:04.349216   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:04.389590   57719 cri.go:89] found id: ""
	I0410 22:52:04.389613   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.389625   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:04.389633   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:04.389697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:04.432962   57719 cri.go:89] found id: ""
	I0410 22:52:04.432989   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.433001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:04.433008   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:04.433070   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:04.473912   57719 cri.go:89] found id: ""
	I0410 22:52:04.473946   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.473955   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:04.473960   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:04.474029   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:04.516157   57719 cri.go:89] found id: ""
	I0410 22:52:04.516182   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.516192   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:04.516203   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:04.516218   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:04.569047   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:04.569082   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:04.622639   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:04.622673   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.638441   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:04.638470   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:04.718203   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:04.718227   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:04.718241   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:02.518104   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.519509   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.519648   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.150041   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.150157   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.902111   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.902816   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:07.302147   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:07.315919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:07.315984   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:07.354692   57719 cri.go:89] found id: ""
	I0410 22:52:07.354723   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.354733   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:07.354740   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:07.354803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:07.393418   57719 cri.go:89] found id: ""
	I0410 22:52:07.393447   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.393459   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:07.393466   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:07.393525   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:07.436810   57719 cri.go:89] found id: ""
	I0410 22:52:07.436837   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.436847   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:07.436855   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:07.436920   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:07.478685   57719 cri.go:89] found id: ""
	I0410 22:52:07.478709   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.478720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:07.478735   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:07.478792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:07.515699   57719 cri.go:89] found id: ""
	I0410 22:52:07.515727   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.515737   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:07.515744   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:07.515805   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:07.556419   57719 cri.go:89] found id: ""
	I0410 22:52:07.556443   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.556451   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:07.556457   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:07.556560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:07.598076   57719 cri.go:89] found id: ""
	I0410 22:52:07.598106   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.598113   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:07.598119   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:07.598183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:07.637778   57719 cri.go:89] found id: ""
	I0410 22:52:07.637814   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.637826   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:07.637839   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:07.637854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:07.693688   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:07.693728   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:07.709256   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:07.709289   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:07.778519   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:07.778544   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:07.778584   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:07.858937   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:07.858973   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.405765   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:10.422019   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:10.422083   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:09.017771   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.017883   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.151568   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.650989   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.402181   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.902520   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.463779   57719 cri.go:89] found id: ""
	I0410 22:52:10.463818   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.463829   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:10.463836   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:10.463923   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:10.503680   57719 cri.go:89] found id: ""
	I0410 22:52:10.503710   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.503718   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:10.503736   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:10.503804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:10.545567   57719 cri.go:89] found id: ""
	I0410 22:52:10.545594   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.545605   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:10.545613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:10.545671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:10.590864   57719 cri.go:89] found id: ""
	I0410 22:52:10.590892   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.590901   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:10.590908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:10.590968   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:10.634628   57719 cri.go:89] found id: ""
	I0410 22:52:10.634659   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.634670   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:10.634677   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:10.634758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:10.681477   57719 cri.go:89] found id: ""
	I0410 22:52:10.681507   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.681526   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:10.681533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:10.681585   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:10.725203   57719 cri.go:89] found id: ""
	I0410 22:52:10.725229   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.725328   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:10.725368   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:10.725443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:10.764994   57719 cri.go:89] found id: ""
	I0410 22:52:10.765028   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.765036   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:10.765044   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:10.765094   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.808981   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:10.809012   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:10.866429   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:10.866468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:10.882512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:10.882537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:10.963016   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:10.963041   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:10.963053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:13.544552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:13.558161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:13.558238   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:13.596945   57719 cri.go:89] found id: ""
	I0410 22:52:13.596977   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.596988   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:13.596996   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:13.597057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:13.637920   57719 cri.go:89] found id: ""
	I0410 22:52:13.637944   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.637951   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:13.637958   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:13.638012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:13.676777   57719 cri.go:89] found id: ""
	I0410 22:52:13.676808   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.676819   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:13.676826   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:13.676887   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:13.714054   57719 cri.go:89] found id: ""
	I0410 22:52:13.714078   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.714086   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:13.714091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:13.714142   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:13.757162   57719 cri.go:89] found id: ""
	I0410 22:52:13.757194   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.757206   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:13.757214   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:13.757276   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:13.793578   57719 cri.go:89] found id: ""
	I0410 22:52:13.793616   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.793629   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:13.793636   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:13.793697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:13.831307   57719 cri.go:89] found id: ""
	I0410 22:52:13.831336   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.831346   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:13.831353   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:13.831400   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:13.872072   57719 cri.go:89] found id: ""
	I0410 22:52:13.872109   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.872117   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:13.872127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:13.872143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:13.926909   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:13.926947   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:13.943095   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:13.943126   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:14.015301   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:14.015336   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:14.015351   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:14.101100   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:14.101137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:13.019599   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.517932   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.150248   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.650269   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.401396   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.402384   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.650213   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:16.664603   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:16.664677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:16.701498   57719 cri.go:89] found id: ""
	I0410 22:52:16.701527   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.701539   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:16.701547   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:16.701618   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:16.740687   57719 cri.go:89] found id: ""
	I0410 22:52:16.740716   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.740725   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:16.740730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:16.740789   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:16.777349   57719 cri.go:89] found id: ""
	I0410 22:52:16.777372   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.777380   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:16.777385   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:16.777454   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:16.819855   57719 cri.go:89] found id: ""
	I0410 22:52:16.819890   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.819900   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:16.819909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:16.819973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:16.859939   57719 cri.go:89] found id: ""
	I0410 22:52:16.859970   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.859981   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:16.859991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:16.860056   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:16.897861   57719 cri.go:89] found id: ""
	I0410 22:52:16.897886   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.897893   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:16.897899   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:16.897962   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:16.935642   57719 cri.go:89] found id: ""
	I0410 22:52:16.935673   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.935681   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:16.935687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:16.935733   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:16.974268   57719 cri.go:89] found id: ""
	I0410 22:52:16.974294   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.974302   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:16.974311   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:16.974327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:17.027850   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:17.027888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:17.043343   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:17.043379   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:17.120945   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:17.120967   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:17.120979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.204831   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:17.204868   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:19.749712   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:19.764102   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:19.764181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:19.800759   57719 cri.go:89] found id: ""
	I0410 22:52:19.800787   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.800795   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:19.800801   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:19.800851   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:19.839678   57719 cri.go:89] found id: ""
	I0410 22:52:19.839711   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.839723   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:19.839730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:19.839791   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:19.876983   57719 cri.go:89] found id: ""
	I0410 22:52:19.877007   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.877015   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:19.877020   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:19.877081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:19.918139   57719 cri.go:89] found id: ""
	I0410 22:52:19.918167   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.918177   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:19.918186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:19.918243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:19.954770   57719 cri.go:89] found id: ""
	I0410 22:52:19.954808   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.954818   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:19.954825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:19.954881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:19.993643   57719 cri.go:89] found id: ""
	I0410 22:52:19.993670   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.993680   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:19.993687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:19.993746   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:20.030466   57719 cri.go:89] found id: ""
	I0410 22:52:20.030494   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.030503   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:20.030510   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:20.030575   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:20.069264   57719 cri.go:89] found id: ""
	I0410 22:52:20.069291   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.069299   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:20.069307   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:20.069318   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:20.117354   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:20.117382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:20.170758   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:20.170800   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:20.187014   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:20.187055   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:20.269620   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:20.269645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:20.269661   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.518440   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.018602   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.151102   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.151664   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.901836   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:23.401655   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.844841   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:22.861923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:22.861983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:22.907972   57719 cri.go:89] found id: ""
	I0410 22:52:22.908000   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.908010   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:22.908017   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:22.908081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:22.949822   57719 cri.go:89] found id: ""
	I0410 22:52:22.949851   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.949861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:22.949869   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:22.949935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:22.989872   57719 cri.go:89] found id: ""
	I0410 22:52:22.989895   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.989902   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:22.989908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:22.989959   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:23.031881   57719 cri.go:89] found id: ""
	I0410 22:52:23.031900   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.031908   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:23.031913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:23.031978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:23.071691   57719 cri.go:89] found id: ""
	I0410 22:52:23.071719   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.071726   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:23.071732   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:23.071792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:23.109961   57719 cri.go:89] found id: ""
	I0410 22:52:23.109990   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.110001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:23.110009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:23.110069   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:23.152955   57719 cri.go:89] found id: ""
	I0410 22:52:23.152979   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.152986   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:23.152991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:23.153054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:23.191883   57719 cri.go:89] found id: ""
	I0410 22:52:23.191924   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.191935   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:23.191947   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:23.191959   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:23.232692   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:23.232731   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:23.283648   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:23.283684   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:23.297701   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:23.297729   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:23.381657   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:23.381673   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:23.381685   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:22.520899   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.016955   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.018541   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.650053   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.402084   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.402670   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.961531   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:25.977539   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:25.977639   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:26.021844   57719 cri.go:89] found id: ""
	I0410 22:52:26.021875   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.021886   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:26.021893   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:26.021954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:26.064286   57719 cri.go:89] found id: ""
	I0410 22:52:26.064316   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.064327   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:26.064335   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:26.064394   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:26.104381   57719 cri.go:89] found id: ""
	I0410 22:52:26.104426   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.104437   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:26.104445   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:26.104522   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:26.143382   57719 cri.go:89] found id: ""
	I0410 22:52:26.143407   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.143417   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:26.143424   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:26.143489   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:26.179609   57719 cri.go:89] found id: ""
	I0410 22:52:26.179635   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.179646   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:26.179652   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:26.179714   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:26.217660   57719 cri.go:89] found id: ""
	I0410 22:52:26.217689   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.217695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:26.217701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:26.217758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:26.254914   57719 cri.go:89] found id: ""
	I0410 22:52:26.254946   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.254956   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:26.254963   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:26.255047   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:26.293738   57719 cri.go:89] found id: ""
	I0410 22:52:26.293769   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.293779   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:26.293790   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:26.293809   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:26.366700   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:26.366725   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:26.366741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:26.445143   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:26.445183   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:26.493175   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:26.493203   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:26.554952   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:26.554992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
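
The cycle above (and the near-identical cycles that follow for process 57719) is minikube's log gatherer: it lists each control-plane component with crictl and, finding no containers at all, falls back to host-level sources. A minimal Go sketch of that flow, run locally instead of over SSH and using simplified, hypothetical helpers rather than minikube's real cri.go/logs.go internals (the shell commands themselves are the ones shown in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors the `sudo crictl ps -a --quiet --name=<name>` calls above:
    // it returns the container IDs (if any) for one control-plane component.
    func listContainers(name string) []string {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		if ids := listContainers(c); len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    		}
    	}
    	// With no containers running, only host-level logs are left to collect,
    	// exactly the commands issued in the log above.
    	for _, cmd := range []string{
    		"sudo journalctl -u kubelet -n 400",
    		"sudo journalctl -u crio -n 400",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	} {
    		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Println(string(out))
    	}
    }
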
	I0410 22:52:29.072225   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:29.087075   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:29.087150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:29.131314   57719 cri.go:89] found id: ""
	I0410 22:52:29.131345   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.131357   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:29.131365   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:29.131427   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:29.169263   57719 cri.go:89] found id: ""
	I0410 22:52:29.169289   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.169298   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:29.169304   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:29.169357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:29.209535   57719 cri.go:89] found id: ""
	I0410 22:52:29.209559   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.209570   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:29.209575   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:29.209630   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:29.251172   57719 cri.go:89] found id: ""
	I0410 22:52:29.251225   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.251233   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:29.251238   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:29.251290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:29.296142   57719 cri.go:89] found id: ""
	I0410 22:52:29.296169   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.296179   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:29.296185   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:29.296245   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:29.336910   57719 cri.go:89] found id: ""
	I0410 22:52:29.336933   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.336940   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:29.336946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:29.337003   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:29.396332   57719 cri.go:89] found id: ""
	I0410 22:52:29.396371   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.396382   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:29.396390   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:29.396475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:29.438301   57719 cri.go:89] found id: ""
	I0410 22:52:29.438332   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.438340   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:29.438348   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:29.438360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:29.482687   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:29.482711   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:29.535115   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:29.535146   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.551736   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:29.551760   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:29.624162   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:29.624198   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:29.624213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:29.517873   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.519737   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.650947   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.651296   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.150101   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.901370   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.902050   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.401849   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.204355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:32.218239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:32.218310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:32.255412   57719 cri.go:89] found id: ""
	I0410 22:52:32.255440   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.255451   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:32.255458   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:32.255516   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:32.293553   57719 cri.go:89] found id: ""
	I0410 22:52:32.293580   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.293591   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:32.293604   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:32.293663   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:32.332814   57719 cri.go:89] found id: ""
	I0410 22:52:32.332846   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.332855   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:32.332862   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:32.332924   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:32.371312   57719 cri.go:89] found id: ""
	I0410 22:52:32.371347   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.371368   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:32.371376   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:32.371441   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:32.407630   57719 cri.go:89] found id: ""
	I0410 22:52:32.407652   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.407659   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:32.407664   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:32.407720   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:32.444878   57719 cri.go:89] found id: ""
	I0410 22:52:32.444904   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.444914   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:32.444923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:32.444989   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:32.490540   57719 cri.go:89] found id: ""
	I0410 22:52:32.490567   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.490578   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:32.490586   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:32.490644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:32.528911   57719 cri.go:89] found id: ""
	I0410 22:52:32.528953   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.528961   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:32.528969   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:32.528979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:32.608601   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:32.608626   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:32.608641   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:32.684840   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:32.684876   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:32.728092   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:32.728132   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:32.778491   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:32.778524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.296228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:35.310615   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:35.310705   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:35.377585   57719 cri.go:89] found id: ""
	I0410 22:52:35.377612   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.377623   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:35.377632   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:35.377692   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:35.417734   57719 cri.go:89] found id: ""
	I0410 22:52:35.417775   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.417796   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:35.417803   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:35.417864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:34.017119   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.017526   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.150859   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.151112   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.402036   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.402201   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:35.456256   57719 cri.go:89] found id: ""
	I0410 22:52:35.456281   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.456291   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:35.456298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:35.456382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:35.495233   57719 cri.go:89] found id: ""
	I0410 22:52:35.495257   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.495267   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:35.495274   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:35.495333   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:35.535239   57719 cri.go:89] found id: ""
	I0410 22:52:35.535273   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.535284   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:35.535292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:35.535352   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:35.571601   57719 cri.go:89] found id: ""
	I0410 22:52:35.571628   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.571638   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:35.571645   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:35.571708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:35.612008   57719 cri.go:89] found id: ""
	I0410 22:52:35.612036   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.612045   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:35.612051   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:35.612099   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:35.649029   57719 cri.go:89] found id: ""
	I0410 22:52:35.649057   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.649065   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:35.649073   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:35.649084   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:35.702630   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:35.702668   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.718404   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:35.718433   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:35.798380   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:35.798405   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:35.798420   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:35.874049   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:35.874085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:38.416265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:38.430921   57719 kubeadm.go:591] duration metric: took 4m3.090666464s to restartPrimaryControlPlane
	W0410 22:52:38.431006   57719 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:52:38.431030   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:52:41.138973   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.707913754s)
	I0410 22:52:41.139063   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:52:41.155646   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:52:41.166345   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:52:41.176443   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:52:41.176481   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:52:41.176547   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:52:41.186887   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:52:41.186960   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:52:41.199740   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:52:41.209843   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:52:41.209901   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:52:41.219804   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.229739   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:52:41.229807   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.240127   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:52:41.249763   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:52:41.249824   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
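
The grep/rm pairs just above are kubeadm.go's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so the following `kubeadm init` can regenerate it. A rough local sketch of that rule (the real code runs the commands over SSH with sudo; the file names and endpoint are taken from the log):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		data, err := os.ReadFile(path)
    		// A missing file, or one that does not reference the expected endpoint,
    		// is deleted (the `sudo rm -f` calls above) so `kubeadm init` rewrites it.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			_ = os.Remove(path)
    		}
    	}
    }
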
	I0410 22:52:41.260148   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:52:41.334127   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:52:41.334200   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:52:41.506104   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:52:41.506307   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:52:41.506488   57719 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0410 22:52:41.715227   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:52:38.519180   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.018674   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.649983   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.152610   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.717460   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:52:41.717564   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:52:41.717654   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:52:41.717781   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:52:41.717898   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:52:41.718004   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:52:41.718099   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:52:41.718203   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:52:41.718550   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:52:41.719083   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:52:41.719413   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:52:41.719571   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:52:41.719675   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:52:41.998202   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:52:42.109508   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:52:42.315545   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:52:42.448910   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:52:42.465903   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:52:42.467312   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:52:42.467387   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:52:42.636790   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:52:40.402237   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.404435   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.638969   57719 out.go:204]   - Booting up control plane ...
	I0410 22:52:42.639106   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:52:42.652152   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:52:42.653843   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:52:42.654719   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:52:42.658006   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:52:43.518416   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.017894   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:43.650778   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.149976   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:44.902059   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.902549   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:49.401695   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.517833   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.150825   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:50.151391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.901096   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.902619   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.518616   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.519254   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:52.649783   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:54.651766   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:56.655687   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.903916   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.400789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.517303   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:59.152346   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:01.651146   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.901531   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.400690   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:02.517569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:04.517775   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.017655   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.651728   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.652505   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.901605   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.902363   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:09.018576   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:11.510820   58186 pod_ready.go:81] duration metric: took 4m0.000124062s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:11.510861   58186 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:53:11.510885   58186 pod_ready.go:38] duration metric: took 4m10.548289153s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:11.510918   58186 kubeadm.go:591] duration metric: took 4m18.480793797s to restartPrimaryControlPlane
	W0410 22:53:11.510993   58186 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:53:11.511019   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
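
The metrics-server lines that dominate this section are pod_ready.go polling the same pod every couple of seconds until it reports Ready or the 4m0s budget expires, which is what times out at 22:53:11 above. A condensed client-go sketch of such a wait loop (namespace, pod name and timing are copied from the log; the loop itself is an assumption, not minikube's actual pod_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget seen in the log
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"metrics-server-57f55c9bc5-4r9pl", metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		fmt.Println(`pod has status "Ready":"False"`)
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting 4m0s for pod to be Ready")
    }
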
	I0410 22:53:08.151155   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.151358   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.400722   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.401658   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.401745   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.652391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.652682   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:17.149892   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:16.900482   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:18.900789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:19.152154   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:21.649975   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:20.902068   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:23.401500   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:22.660165   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:53:22.660260   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:22.660520   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
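
The [kubelet-check] failures above come from kubeadm probing the kubelet's local health endpoint; the log even quotes the equivalent curl. A tiny Go sketch of that probe (the endpoint is taken from the log; the retry count and cadence here are assumptions):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "http://localhost:10248/healthz" // endpoint quoted in the [kubelet-check] messages
    	for attempt := 0; attempt < 5; attempt++ {
    		resp, err := http.Get(url)
    		if err != nil {
    			// e.g. "dial tcp 127.0.0.1:10248: connect: connection refused", as in the log
    			fmt.Println("kubelet health check failed:", err)
    		} else {
    			healthy := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if healthy {
    				fmt.Println("kubelet is healthy")
    				return
    			}
    			fmt.Println("kubelet responded with status", resp.StatusCode)
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("kubelet still isn't running or healthy")
    }
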
	I0410 22:53:23.653457   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:26.149469   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:25.903070   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:28.400947   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:27.660705   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:27.660919   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:28.150895   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.650254   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.401054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.401994   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.654427   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:35.149580   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150506   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150533   58701 pod_ready.go:81] duration metric: took 4m0.00757056s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:37.150544   58701 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0410 22:53:37.150552   58701 pod_ready.go:38] duration metric: took 4m5.55870495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:37.150570   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:53:37.150602   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:37.150659   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:37.213472   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.213499   58701 cri.go:89] found id: ""
	I0410 22:53:37.213511   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:37.213561   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.218928   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:37.218997   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:37.260045   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.260066   58701 cri.go:89] found id: ""
	I0410 22:53:37.260073   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:37.260116   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.265329   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:37.265393   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:37.306649   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:37.306674   58701 cri.go:89] found id: ""
	I0410 22:53:37.306682   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:37.306729   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.311163   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:37.311213   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:37.351855   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:37.351883   58701 cri.go:89] found id: ""
	I0410 22:53:37.351890   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:37.351937   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.356427   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:37.356497   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:34.900998   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:36.901173   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:39.400680   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.661409   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:37.661698   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:37.399224   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:37.399248   58701 cri.go:89] found id: ""
	I0410 22:53:37.399257   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:37.399315   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.404314   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:37.404380   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:37.444169   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.444196   58701 cri.go:89] found id: ""
	I0410 22:53:37.444205   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:37.444264   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.448618   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:37.448693   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:37.487481   58701 cri.go:89] found id: ""
	I0410 22:53:37.487507   58701 logs.go:276] 0 containers: []
	W0410 22:53:37.487514   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:37.487519   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:37.487566   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:37.531000   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:37.531018   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.531022   58701 cri.go:89] found id: ""
	I0410 22:53:37.531029   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:37.531081   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.535679   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.539974   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:37.539998   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:37.601043   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:37.601086   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:37.616427   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:37.616458   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.669951   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:37.669983   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.716243   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:37.716273   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.774644   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:37.774678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.821033   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:37.821077   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:37.883644   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:37.883678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:38.019289   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:38.019320   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:38.057708   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:38.057739   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:38.100119   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:38.100149   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:38.143845   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:38.143875   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:38.186718   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:38.186749   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
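
In contrast with the earlier cycles that found nothing, this gathering pass for process 58701 finds running containers and pulls their logs one container ID at a time with `sudo /usr/bin/crictl logs --tail 400 <id>`. A short local sketch of that per-container step (the two IDs are copied from the log purely for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	ids := []string{
    		"74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c", // kube-apiserver
    		"34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072", // etcd
    	}
    	for _, id := range ids {
    		// Same command shape as the log: fetch the last 400 lines for each container.
    		out, err := exec.Command("/bin/bash", "-c",
    			"sudo /usr/bin/crictl logs --tail 400 "+id).CombinedOutput()
    		if err != nil {
    			fmt.Println("failed to fetch logs for", id, ":", err)
    			continue
    		}
    		fmt.Println(string(out))
    	}
    }
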
	I0410 22:53:41.168951   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:53:41.186828   58701 api_server.go:72] duration metric: took 4m17.343179611s to wait for apiserver process to appear ...
	I0410 22:53:41.186866   58701 api_server.go:88] waiting for apiserver healthz status ...
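
Before it starts polling healthz, api_server.go waits for the kube-apiserver process to appear on the node, which is what the repeated `pgrep -xnf kube-apiserver.*minikube.*` runs above are doing. A minimal sketch of that first phase, run locally rather than over SSH, with a hypothetical timeout and polling interval:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Same probe as the log: pgrep exits 0 only if a matching process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("apiserver process appeared; next step is waiting for its healthz status")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for the apiserver process to appear")
    }
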
	I0410 22:53:41.186911   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:41.186972   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:41.228167   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.228194   58701 cri.go:89] found id: ""
	I0410 22:53:41.228201   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:41.228251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.232754   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:41.232812   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:41.271497   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.271519   58701 cri.go:89] found id: ""
	I0410 22:53:41.271527   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:41.271575   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.276165   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:41.276234   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:41.319164   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.319187   58701 cri.go:89] found id: ""
	I0410 22:53:41.319195   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:41.319251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.323627   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:41.323696   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:41.366648   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:41.366671   58701 cri.go:89] found id: ""
	I0410 22:53:41.366678   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:41.366733   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.371132   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:41.371197   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:41.412956   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.412974   58701 cri.go:89] found id: ""
	I0410 22:53:41.412982   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:41.413034   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.417441   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:41.417495   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:41.460008   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.460037   58701 cri.go:89] found id: ""
	I0410 22:53:41.460048   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:41.460105   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.464422   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:41.464492   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:41.504095   58701 cri.go:89] found id: ""
	I0410 22:53:41.504126   58701 logs.go:276] 0 containers: []
	W0410 22:53:41.504134   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:41.504140   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:41.504199   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:41.543443   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.543467   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.543473   58701 cri.go:89] found id: ""
	I0410 22:53:41.543481   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:41.543540   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.548182   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.552917   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:41.552941   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.601620   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:41.601652   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.653090   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:41.653124   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.692683   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:41.692711   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.736312   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:41.736353   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:41.753242   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:41.753283   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.812881   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:41.812910   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.860686   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:41.860714   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.902523   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:41.902546   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:41.945812   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:41.945848   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:42.001012   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:42.001046   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:42.123971   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:42.124000   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:42.168773   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:42.168806   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:41.405604   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.901172   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.595677   58186 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.084634816s)
	I0410 22:53:43.595765   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:43.613470   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:53:43.624876   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:53:43.638564   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:53:43.638592   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:53:43.638641   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:53:43.652554   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:53:43.652608   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:53:43.664263   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:53:43.674443   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:53:43.674497   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:53:43.695444   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.705446   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:53:43.705518   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.716451   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:53:43.726343   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:53:43.726407   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:53:43.736859   58186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:53:43.957994   58186 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:53:45.115742   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:53:45.120239   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:53:45.121662   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:53:45.121690   58701 api_server.go:131] duration metric: took 3.934815447s to wait for apiserver health ...
	I0410 22:53:45.121699   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:53:45.121727   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:45.121780   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:45.172291   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.172315   58701 cri.go:89] found id: ""
	I0410 22:53:45.172324   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:45.172382   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.177041   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:45.177103   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:45.213853   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:45.213880   58701 cri.go:89] found id: ""
	I0410 22:53:45.213889   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:45.213944   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.218478   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:45.218546   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:45.268753   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.268779   58701 cri.go:89] found id: ""
	I0410 22:53:45.268792   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:45.268843   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.273223   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:45.273291   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:45.314032   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:45.314057   58701 cri.go:89] found id: ""
	I0410 22:53:45.314066   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:45.314115   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.318671   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:45.318740   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:45.356139   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.356167   58701 cri.go:89] found id: ""
	I0410 22:53:45.356177   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:45.356234   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.361449   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:45.361520   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:45.405153   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.405174   58701 cri.go:89] found id: ""
	I0410 22:53:45.405181   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:45.405230   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.409795   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:45.409871   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:45.451984   58701 cri.go:89] found id: ""
	I0410 22:53:45.452016   58701 logs.go:276] 0 containers: []
	W0410 22:53:45.452026   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:45.452034   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:45.452095   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:45.491612   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.491650   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.491656   58701 cri.go:89] found id: ""
	I0410 22:53:45.491665   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:45.491724   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.496253   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.500723   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:45.500751   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.557083   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:45.557118   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.616768   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:45.616804   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.664097   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:45.664133   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.707920   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:45.707957   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:45.751862   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:45.751898   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.806584   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:45.806619   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.846145   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:45.846170   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:45.970766   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:45.970796   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:46.024049   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:46.024081   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:46.067009   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:46.067048   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:46.462765   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:46.462812   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:46.520007   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:46.520049   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:49.047137   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:53:49.047166   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.047170   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.047174   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.047177   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.047180   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.047183   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.047189   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.047192   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.047201   58701 system_pods.go:74] duration metric: took 3.925495812s to wait for pod list to return data ...
	I0410 22:53:49.047208   58701 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:53:49.050341   58701 default_sa.go:45] found service account: "default"
	I0410 22:53:49.050363   58701 default_sa.go:55] duration metric: took 3.148222ms for default service account to be created ...
	I0410 22:53:49.050371   58701 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:53:49.056364   58701 system_pods.go:86] 8 kube-system pods found
	I0410 22:53:49.056390   58701 system_pods.go:89] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.056414   58701 system_pods.go:89] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.056423   58701 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.056431   58701 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.056437   58701 system_pods.go:89] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.056444   58701 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.056455   58701 system_pods.go:89] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.056462   58701 system_pods.go:89] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.056475   58701 system_pods.go:126] duration metric: took 6.097239ms to wait for k8s-apps to be running ...
	I0410 22:53:49.056492   58701 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:53:49.056537   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:49.077239   58701 system_svc.go:56] duration metric: took 20.737127ms WaitForService to wait for kubelet
	I0410 22:53:49.077269   58701 kubeadm.go:576] duration metric: took 4m25.233626302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:53:49.077297   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:53:49.080463   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:53:49.080486   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:53:49.080497   58701 node_conditions.go:105] duration metric: took 3.195662ms to run NodePressure ...
	I0410 22:53:49.080508   58701 start.go:240] waiting for startup goroutines ...
	I0410 22:53:49.080515   58701 start.go:245] waiting for cluster config update ...
	I0410 22:53:49.080525   58701 start.go:254] writing updated cluster config ...
	I0410 22:53:49.080805   58701 ssh_runner.go:195] Run: rm -f paused
	I0410 22:53:49.141489   58701 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:53:49.143597   58701 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-519831" cluster and "default" namespace by default
	I0410 22:53:45.903632   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:48.403981   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.064071   58186 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0410 22:53:53.064154   58186 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:53:53.064260   58186 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:53:53.064429   58186 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:53:53.064574   58186 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:53:53.064670   58186 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:53:53.066595   58186 out.go:204]   - Generating certificates and keys ...
	I0410 22:53:53.066703   58186 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:53:53.066808   58186 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:53:53.066929   58186 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:53:53.067023   58186 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:53:53.067155   58186 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:53:53.067235   58186 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:53:53.067329   58186 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:53:53.067433   58186 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:53:53.067546   58186 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:53:53.067655   58186 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:53:53.067733   58186 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:53:53.067890   58186 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:53:53.067961   58186 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:53:53.068049   58186 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:53:53.068132   58186 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:53:53.068232   58186 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:53:53.068310   58186 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:53:53.068379   58186 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:53:53.068510   58186 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:53:53.070126   58186 out.go:204]   - Booting up control plane ...
	I0410 22:53:53.070219   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:53:53.070324   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:53:53.070425   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:53:53.070565   58186 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:53:53.070686   58186 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:53:53.070748   58186 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:53:53.070973   58186 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:53:53.071083   58186 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002820 seconds
	I0410 22:53:53.071249   58186 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:53:53.071424   58186 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:53:53.071485   58186 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:53:53.071624   58186 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-706500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:53:53.071680   58186 kubeadm.go:309] [bootstrap-token] Using token: 0wvld6.jntz9ft9bn5g46le
	I0410 22:53:53.073567   58186 out.go:204]   - Configuring RBAC rules ...
	I0410 22:53:53.073708   58186 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:53:53.073819   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:53:53.074015   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:53:53.074206   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:53:53.074370   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:53:53.074548   58186 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:53:53.074726   58186 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:53:53.074798   58186 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:53:53.074873   58186 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:53:53.074884   58186 kubeadm.go:309] 
	I0410 22:53:53.074956   58186 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:53:53.074978   58186 kubeadm.go:309] 
	I0410 22:53:53.075077   58186 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:53:53.075088   58186 kubeadm.go:309] 
	I0410 22:53:53.075119   58186 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:53:53.075191   58186 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:53:53.075262   58186 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:53:53.075273   58186 kubeadm.go:309] 
	I0410 22:53:53.075337   58186 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:53:53.075353   58186 kubeadm.go:309] 
	I0410 22:53:53.075419   58186 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:53:53.075437   58186 kubeadm.go:309] 
	I0410 22:53:53.075503   58186 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:53:53.075621   58186 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:53:53.075714   58186 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:53:53.075724   58186 kubeadm.go:309] 
	I0410 22:53:53.075829   58186 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:53:53.075936   58186 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:53:53.075953   58186 kubeadm.go:309] 
	I0410 22:53:53.076058   58186 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076196   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:53:53.076253   58186 kubeadm.go:309] 	--control-plane 
	I0410 22:53:53.076270   58186 kubeadm.go:309] 
	I0410 22:53:53.076387   58186 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:53:53.076422   58186 kubeadm.go:309] 
	I0410 22:53:53.076516   58186 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076661   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:53:53.076711   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:53:53.076726   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:53:53.078503   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:53:50.902397   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.403449   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.079631   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:53:53.132043   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:53:53.167760   58186 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:53:53.167847   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:53.167870   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-706500 minikube.k8s.io/updated_at=2024_04_10T22_53_53_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=embed-certs-706500 minikube.k8s.io/primary=true
	I0410 22:53:53.511359   58186 ops.go:34] apiserver oom_adj: -16
	I0410 22:53:53.511506   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.012080   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.511816   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.011883   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.511809   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.011572   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.512114   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:57.011878   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.900548   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.901541   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.662444   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:57.662687   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:57.511726   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.011563   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.512617   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.012145   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.512448   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.012278   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.512290   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.012507   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.512415   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:02.011660   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.401622   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.902558   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.511581   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.012326   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.512539   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.012085   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.512496   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.011911   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.512180   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.619801   58186 kubeadm.go:1107] duration metric: took 12.452015223s to wait for elevateKubeSystemPrivileges
	W0410 22:54:05.619839   58186 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:54:05.619847   58186 kubeadm.go:393] duration metric: took 5m12.640298551s to StartCluster
	I0410 22:54:05.619862   58186 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.619936   58186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:54:05.621989   58186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.622331   58186 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:54:05.624233   58186 out.go:177] * Verifying Kubernetes components...
	I0410 22:54:05.622444   58186 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:54:05.622516   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:54:05.625850   58186 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-706500"
	I0410 22:54:05.625872   58186 addons.go:69] Setting default-storageclass=true in profile "embed-certs-706500"
	I0410 22:54:05.625882   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:54:05.625893   58186 addons.go:69] Setting metrics-server=true in profile "embed-certs-706500"
	I0410 22:54:05.625924   58186 addons.go:234] Setting addon metrics-server=true in "embed-certs-706500"
	W0410 22:54:05.625930   58186 addons.go:243] addon metrics-server should already be in state true
	I0410 22:54:05.625954   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.625888   58186 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-706500"
	I0410 22:54:05.625903   58186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-706500"
	W0410 22:54:05.625982   58186 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:54:05.626012   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.626365   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626407   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626421   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626440   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626441   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626442   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.643647   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0410 22:54:05.643758   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0410 22:54:05.644070   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0410 22:54:05.644101   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644253   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644856   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644883   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644915   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.645239   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645419   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.645475   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.645489   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645501   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.646021   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.646035   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646062   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.646588   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646619   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.648242   58186 addons.go:234] Setting addon default-storageclass=true in "embed-certs-706500"
	W0410 22:54:05.648261   58186 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:54:05.648282   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.648555   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.648582   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.661773   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0410 22:54:05.662556   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.663049   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.663073   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.663474   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.663708   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.664716   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I0410 22:54:05.665027   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.665617   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.665634   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.665706   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0410 22:54:05.666342   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.666343   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.665946   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.668790   58186 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:54:05.667015   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.667244   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.670336   58186 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:05.670357   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:54:05.670374   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.668826   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.668843   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.671350   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.671633   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.673653   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.675310   58186 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:54:05.674011   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.674533   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.676671   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:54:05.676677   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.676690   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:54:05.676710   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.676713   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.676821   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.676976   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.677117   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.680146   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.680927   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.680964   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.681136   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.681515   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.681681   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.681834   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.688424   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0410 22:54:05.688861   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.689299   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.689320   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.689589   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.689741   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.691090   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.691335   58186 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:05.691353   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:54:05.691369   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.694552   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695080   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.695118   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695426   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.695771   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.695939   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.696084   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.860032   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:54:05.881036   58186 node_ready.go:35] waiting up to 6m0s for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891218   58186 node_ready.go:49] node "embed-certs-706500" has status "Ready":"True"
	I0410 22:54:05.891237   58186 node_ready.go:38] duration metric: took 10.166143ms for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891247   58186 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:05.899013   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:06.064031   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:54:06.064051   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:54:06.065727   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:06.075127   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:06.140574   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:54:06.140607   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:54:06.216389   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:06.216428   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:54:06.356117   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:07.409983   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.334826611s)
	I0410 22:54:07.410039   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410052   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410103   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.344342448s)
	I0410 22:54:07.410184   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410199   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410321   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410362   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410371   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410382   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410452   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410505   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410519   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410531   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410678   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410765   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410802   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410820   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410822   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.438723   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.438742   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.439085   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.439104   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.439085   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.738187   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.382031326s)
	I0410 22:54:07.738252   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738267   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738556   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738586   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738597   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738604   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738865   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738885   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738908   58186 addons.go:470] Verifying addon metrics-server=true in "embed-certs-706500"
	I0410 22:54:07.741639   58186 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0410 22:54:05.403374   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:07.903041   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.895154   57270 pod_ready.go:81] duration metric: took 4m0.000708165s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	E0410 22:54:08.895186   57270 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:54:08.895214   57270 pod_ready.go:38] duration metric: took 4m14.550044852s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:08.895246   57270 kubeadm.go:591] duration metric: took 4m22.444968141s to restartPrimaryControlPlane
	W0410 22:54:08.895308   57270 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:54:08.895339   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:54:07.742954   58186 addons.go:505] duration metric: took 2.120520274s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0410 22:54:07.910203   58186 pod_ready.go:102] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.906369   58186 pod_ready.go:92] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.906394   58186 pod_ready.go:81] duration metric: took 3.007348288s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.906407   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913564   58186 pod_ready.go:92] pod "coredns-76f75df574-v2pp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.913582   58186 pod_ready.go:81] duration metric: took 7.168463ms for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913592   58186 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919270   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.919296   58186 pod_ready.go:81] duration metric: took 5.696297ms for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919308   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924389   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.924430   58186 pod_ready.go:81] duration metric: took 5.111624ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924443   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929296   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.929320   58186 pod_ready.go:81] duration metric: took 4.869073ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929333   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305730   58186 pod_ready.go:92] pod "kube-proxy-xj5nq" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.305756   58186 pod_ready.go:81] duration metric: took 376.415901ms for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305770   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703841   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.703869   58186 pod_ready.go:81] duration metric: took 398.090582ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703881   58186 pod_ready.go:38] duration metric: took 3.812625835s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:09.703898   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:54:09.703957   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:54:09.720728   58186 api_server.go:72] duration metric: took 4.098354983s to wait for apiserver process to appear ...
	I0410 22:54:09.720763   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:54:09.720786   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:54:09.726522   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:54:09.727951   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:54:09.727979   58186 api_server.go:131] duration metric: took 7.20731ms to wait for apiserver health ...
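The healthz gate logged above is a plain HTTPS GET against the apiserver; an equivalent manual probe is shown below purely as an illustration (endpoint and expected body are taken from the log; -k skips verification of the cluster-CA-signed certificate):

	curl -k https://192.168.39.10:8443/healthz
	# expected response body, as logged above: ok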
	I0410 22:54:09.727989   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:54:09.908166   58186 system_pods.go:59] 9 kube-system pods found
	I0410 22:54:09.908203   58186 system_pods.go:61] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:09.908212   58186 system_pods.go:61] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:09.908217   58186 system_pods.go:61] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:09.908222   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:09.908227   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:09.908232   58186 system_pods.go:61] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:09.908236   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:09.908246   58186 system_pods.go:61] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:09.908251   58186 system_pods.go:61] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:09.908263   58186 system_pods.go:74] duration metric: took 180.267138ms to wait for pod list to return data ...
	I0410 22:54:09.908276   58186 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:54:10.103556   58186 default_sa.go:45] found service account: "default"
	I0410 22:54:10.103586   58186 default_sa.go:55] duration metric: took 195.301798ms for default service account to be created ...
	I0410 22:54:10.103597   58186 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:54:10.309537   58186 system_pods.go:86] 9 kube-system pods found
	I0410 22:54:10.309566   58186 system_pods.go:89] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:10.309572   58186 system_pods.go:89] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:10.309578   58186 system_pods.go:89] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:10.309583   58186 system_pods.go:89] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:10.309588   58186 system_pods.go:89] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:10.309592   58186 system_pods.go:89] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:10.309596   58186 system_pods.go:89] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:10.309602   58186 system_pods.go:89] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:10.309607   58186 system_pods.go:89] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:10.309617   58186 system_pods.go:126] duration metric: took 206.014442ms to wait for k8s-apps to be running ...
	I0410 22:54:10.309624   58186 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:54:10.309666   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:10.324614   58186 system_svc.go:56] duration metric: took 14.97975ms WaitForService to wait for kubelet
	I0410 22:54:10.324651   58186 kubeadm.go:576] duration metric: took 4.702277594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:54:10.324669   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:54:10.503911   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:54:10.503939   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:54:10.503949   58186 node_conditions.go:105] duration metric: took 179.27538ms to run NodePressure ...
	I0410 22:54:10.503959   58186 start.go:240] waiting for startup goroutines ...
	I0410 22:54:10.503966   58186 start.go:245] waiting for cluster config update ...
	I0410 22:54:10.503975   58186 start.go:254] writing updated cluster config ...
	I0410 22:54:10.504242   58186 ssh_runner.go:195] Run: rm -f paused
	I0410 22:54:10.555500   58186 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:54:10.557941   58186 out.go:177] * Done! kubectl is now configured to use "embed-certs-706500" cluster and "default" namespace by default
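Once "Done!" is printed, the kubeconfig context name matches the minikube profile, so the embed-certs cluster can be inspected with plain kubectl; a minimal sketch for reference (context name taken from the log, commands are standard kubectl and were not run as part of this test):

	kubectl --context embed-certs-706500 get nodes -o wide
	kubectl --context embed-certs-706500 -n kube-system get pods
	# metrics-server was still Pending in the pod listing above; check the deployment directly:
	kubectl --context embed-certs-706500 -n kube-system get deploy metrics-server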
	I0410 22:54:37.664290   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:54:37.664604   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:54:37.664634   57719 kubeadm.go:309] 
	I0410 22:54:37.664776   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:54:37.664843   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:54:37.664854   57719 kubeadm.go:309] 
	I0410 22:54:37.664901   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:54:37.664968   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:54:37.665086   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:54:37.665101   57719 kubeadm.go:309] 
	I0410 22:54:37.665245   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:54:37.665313   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:54:37.665360   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:54:37.665372   57719 kubeadm.go:309] 
	I0410 22:54:37.665579   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:54:37.665695   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:54:37.665707   57719 kubeadm.go:309] 
	I0410 22:54:37.665868   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:54:37.666063   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:54:37.666192   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:54:37.666272   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:54:37.666284   57719 kubeadm.go:309] 
	I0410 22:54:37.667202   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:37.667329   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:54:37.667420   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0410 22:54:37.667555   57719 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0410 22:54:37.667623   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
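The hints kubeadm printed in the failure above are ordinary node-side commands; a minimal troubleshooting sketch using only those same commands (socket path copied from the log, CONTAINERID is a placeholder to substitute):

	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 100
	curl -sSL http://localhost:10248/healthz            # the kubelet healthz endpoint kubeadm polls above
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # substitute a real container ID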
	I0410 22:54:40.975782   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.080419546s)
	I0410 22:54:40.975854   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:40.993677   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:54:41.006185   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:41.016820   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:41.016850   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:41.016985   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:41.026802   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:41.026871   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:41.036992   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:41.046896   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:41.046962   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:41.057184   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.067261   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:41.067321   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.077846   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:41.087745   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:41.087795   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
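The four grep/rm pairs above implement minikube's stale-kubeconfig cleanup: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init re-runs. A compact, purely illustrative restatement (endpoint and file names copied from the log; the loop is a sketch, not minikube's actual code):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done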
	I0410 22:54:41.098660   57270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:41.159736   57270 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.1
	I0410 22:54:41.159807   57270 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:54:41.316137   57270 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:54:41.316279   57270 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:54:41.316446   57270 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:54:41.559720   57270 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:54:41.561946   57270 out.go:204]   - Generating certificates and keys ...
	I0410 22:54:41.562039   57270 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:54:41.562141   57270 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:54:41.562211   57270 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:54:41.562275   57270 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:54:41.562352   57270 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:54:41.562460   57270 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:54:41.562572   57270 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:54:41.562667   57270 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:54:41.562803   57270 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:54:41.562917   57270 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:54:41.562992   57270 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:54:41.563081   57270 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:54:41.723729   57270 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:54:41.834274   57270 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:54:41.936758   57270 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:54:42.038298   57270 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:54:42.229459   57270 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:54:42.230047   57270 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:54:42.233021   57270 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:54:42.236068   57270 out.go:204]   - Booting up control plane ...
	I0410 22:54:42.236197   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:54:42.236303   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:54:42.236421   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:54:42.255487   57270 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:54:42.256345   57270 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:54:42.256450   57270 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:54:42.391623   57270 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0410 22:54:42.391736   57270 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0410 22:54:43.393825   57270 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00265832s
	I0410 22:54:43.393973   57270 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0410 22:54:43.156141   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.488487447s)
	I0410 22:54:43.156227   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:43.170709   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:43.180624   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:43.180647   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:43.180701   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:43.190482   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:43.190533   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:43.200261   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:43.210061   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:43.210116   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:43.220430   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.230810   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:43.230877   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.241141   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:43.251043   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:43.251111   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:54:43.261163   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:43.534002   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:48.398196   57270 kubeadm.go:309] [api-check] The API server is healthy after 5.002218646s
	I0410 22:54:48.410618   57270 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:54:48.430553   57270 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:54:48.465343   57270 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:54:48.465614   57270 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-646133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:54:48.489066   57270 kubeadm.go:309] [bootstrap-token] Using token: 14xwwp.uyth37qsjfn0mpcx
	I0410 22:54:48.490984   57270 out.go:204]   - Configuring RBAC rules ...
	I0410 22:54:48.491116   57270 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:54:48.502789   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:54:48.516871   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:54:48.523600   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:54:48.527939   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:54:48.537216   57270 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:54:48.806350   57270 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:54:49.234618   57270 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:54:49.803640   57270 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:54:49.804948   57270 kubeadm.go:309] 
	I0410 22:54:49.805074   57270 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:54:49.805095   57270 kubeadm.go:309] 
	I0410 22:54:49.805194   57270 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:54:49.805209   57270 kubeadm.go:309] 
	I0410 22:54:49.805240   57270 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:54:49.805323   57270 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:54:49.805403   57270 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:54:49.805415   57270 kubeadm.go:309] 
	I0410 22:54:49.805482   57270 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:54:49.805489   57270 kubeadm.go:309] 
	I0410 22:54:49.805562   57270 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:54:49.805580   57270 kubeadm.go:309] 
	I0410 22:54:49.805646   57270 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:54:49.805781   57270 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:54:49.805888   57270 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:54:49.805901   57270 kubeadm.go:309] 
	I0410 22:54:49.806038   57270 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:54:49.806143   57270 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:54:49.806154   57270 kubeadm.go:309] 
	I0410 22:54:49.806262   57270 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806398   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:54:49.806438   57270 kubeadm.go:309] 	--control-plane 
	I0410 22:54:49.806456   57270 kubeadm.go:309] 
	I0410 22:54:49.806565   57270 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:54:49.806581   57270 kubeadm.go:309] 
	I0410 22:54:49.806661   57270 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806777   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:54:49.808385   57270 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:49.808455   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:54:49.808473   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:54:49.811276   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:54:49.812840   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:54:49.829865   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:54:49.854383   57270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:54:49.854454   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:49.854456   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-646133 minikube.k8s.io/updated_at=2024_04_10T22_54_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=no-preload-646133 minikube.k8s.io/primary=true
	I0410 22:54:49.888254   57270 ops.go:34] apiserver oom_adj: -16
	I0410 22:54:50.073922   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:50.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.074134   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.574654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.074970   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.074799   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.574902   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.074695   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.574038   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.074975   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.574297   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.074490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.574490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.074280   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.574569   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.074654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.074630   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.574546   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.075044   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.074961   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.574004   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.074121   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.574476   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.705604   57270 kubeadm.go:1107] duration metric: took 12.851213125s to wait for elevateKubeSystemPrivileges
	W0410 22:55:02.705636   57270 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:55:02.705644   57270 kubeadm.go:393] duration metric: took 5m16.306442396s to StartCluster
	I0410 22:55:02.705660   57270 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.705739   57270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:55:02.707592   57270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.707844   57270 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:55:02.709479   57270 out.go:177] * Verifying Kubernetes components...
	I0410 22:55:02.707944   57270 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:55:02.708074   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:55:02.710816   57270 addons.go:69] Setting storage-provisioner=true in profile "no-preload-646133"
	I0410 22:55:02.710827   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:55:02.710854   57270 addons.go:234] Setting addon storage-provisioner=true in "no-preload-646133"
	W0410 22:55:02.710865   57270 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:55:02.710889   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.710819   57270 addons.go:69] Setting default-storageclass=true in profile "no-preload-646133"
	I0410 22:55:02.710975   57270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-646133"
	I0410 22:55:02.710821   57270 addons.go:69] Setting metrics-server=true in profile "no-preload-646133"
	I0410 22:55:02.711079   57270 addons.go:234] Setting addon metrics-server=true in "no-preload-646133"
	W0410 22:55:02.711090   57270 addons.go:243] addon metrics-server should already be in state true
	I0410 22:55:02.711119   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.711325   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711349   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711352   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711382   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711486   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711507   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.729696   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0410 22:55:02.730179   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.730725   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.730751   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.731138   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I0410 22:55:02.731161   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0410 22:55:02.731223   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.731532   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731551   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731920   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.731951   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.732083   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732103   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732266   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732290   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732642   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732692   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732892   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.733291   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.733336   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.737245   57270 addons.go:234] Setting addon default-storageclass=true in "no-preload-646133"
	W0410 22:55:02.737274   57270 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:55:02.737304   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.737674   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.737710   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.749656   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0410 22:55:02.750133   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.751030   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.751054   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.751467   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.751642   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.752548   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0410 22:55:02.753119   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.753727   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.753903   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.753918   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.755963   57270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:55:02.754443   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.757499   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0410 22:55:02.757548   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:55:02.757559   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:55:02.757576   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.757684   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.758428   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.758880   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.758893   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.759783   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.760197   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.760224   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.760379   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.762291   57270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:55:02.761210   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.761741   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.763819   57270 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:02.763907   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:55:02.763918   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.763841   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.763963   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.764040   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.764153   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.764239   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.767729   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767758   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.767776   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767730   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.767951   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.768100   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.768223   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.782788   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0410 22:55:02.783161   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.783701   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.783726   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.784081   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.784347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.785932   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.786186   57270 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:02.786200   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:55:02.786217   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.789193   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789526   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.789576   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789837   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.790096   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.790278   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.790431   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
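The addon manifests are copied over SSH using the per-profile key shown in the sshutil lines above; an equivalent manual connection, for reference only (IP, port, user and key path all taken from the log):

	ssh -p 22 -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa docker@192.168.50.17
	# or, equivalently, via the profile name:
	minikube -p no-preload-646133 ssh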
	I0410 22:55:02.922239   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:55:02.957665   57270 node_ready.go:35] waiting up to 6m0s for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981427   57270 node_ready.go:49] node "no-preload-646133" has status "Ready":"True"
	I0410 22:55:02.981449   57270 node_ready.go:38] duration metric: took 23.75134ms for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981458   57270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:02.986557   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:03.024992   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:03.032744   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:03.156968   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:55:03.156989   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:55:03.237497   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:55:03.237522   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:55:03.274982   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.275005   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:55:03.317464   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.512107   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512130   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512173   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512198   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512435   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512455   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512525   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512530   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512541   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512542   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512538   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512551   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512558   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512497   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512782   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512799   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512876   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512915   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512878   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.525688   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.525707   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.526017   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.526042   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.526057   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.905597   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.905627   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906016   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906081   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906089   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906101   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.906107   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906353   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906355   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906381   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906392   57270 addons.go:470] Verifying addon metrics-server=true in "no-preload-646133"
	I0410 22:55:03.908467   57270 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0410 22:55:03.910238   57270 addons.go:505] duration metric: took 1.20230017s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0410 22:55:05.035855   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"False"
	I0410 22:55:05.493330   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.493354   57270 pod_ready.go:81] duration metric: took 2.506773848s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.493365   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498568   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.498593   57270 pod_ready.go:81] duration metric: took 5.220548ms for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498604   57270 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505133   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.505156   57270 pod_ready.go:81] duration metric: took 6.544104ms for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505165   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510391   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.510415   57270 pod_ready.go:81] duration metric: took 5.2417ms for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510427   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524717   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.524737   57270 pod_ready.go:81] duration metric: took 14.302445ms for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524747   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891005   57270 pod_ready.go:92] pod "kube-proxy-24vhc" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.891029   57270 pod_ready.go:81] duration metric: took 366.275947ms for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891039   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291050   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:06.291075   57270 pod_ready.go:81] duration metric: took 400.028808ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291084   57270 pod_ready.go:38] duration metric: took 3.309617471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:06.291101   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:55:06.291165   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:55:06.308433   57270 api_server.go:72] duration metric: took 3.600549626s to wait for apiserver process to appear ...
	I0410 22:55:06.308461   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:55:06.308479   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:55:06.312630   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:55:06.313434   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:55:06.313457   57270 api_server.go:131] duration metric: took 4.989017ms to wait for apiserver health ...
	I0410 22:55:06.313466   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:55:06.494780   57270 system_pods.go:59] 9 kube-system pods found
	I0410 22:55:06.494813   57270 system_pods.go:61] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.494820   57270 system_pods.go:61] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.494826   57270 system_pods.go:61] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.494833   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.494838   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.494843   57270 system_pods.go:61] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.494848   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.494856   57270 system_pods.go:61] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.494862   57270 system_pods.go:61] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.494871   57270 system_pods.go:74] duration metric: took 181.399385ms to wait for pod list to return data ...
	I0410 22:55:06.494890   57270 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:55:06.690158   57270 default_sa.go:45] found service account: "default"
	I0410 22:55:06.690185   57270 default_sa.go:55] duration metric: took 195.289153ms for default service account to be created ...
	I0410 22:55:06.690194   57270 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:55:06.893604   57270 system_pods.go:86] 9 kube-system pods found
	I0410 22:55:06.893632   57270 system_pods.go:89] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.893638   57270 system_pods.go:89] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.893642   57270 system_pods.go:89] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.893646   57270 system_pods.go:89] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.893651   57270 system_pods.go:89] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.893656   57270 system_pods.go:89] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.893659   57270 system_pods.go:89] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.893665   57270 system_pods.go:89] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.893670   57270 system_pods.go:89] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.893679   57270 system_pods.go:126] duration metric: took 203.480657ms to wait for k8s-apps to be running ...
	I0410 22:55:06.893686   57270 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:55:06.893730   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:55:06.909072   57270 system_svc.go:56] duration metric: took 15.374403ms WaitForService to wait for kubelet
	I0410 22:55:06.909096   57270 kubeadm.go:576] duration metric: took 4.20122533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:55:06.909115   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:55:07.090651   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:55:07.090673   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:55:07.090682   57270 node_conditions.go:105] duration metric: took 181.563241ms to run NodePressure ...
	I0410 22:55:07.090692   57270 start.go:240] waiting for startup goroutines ...
	I0410 22:55:07.090698   57270 start.go:245] waiting for cluster config update ...
	I0410 22:55:07.090707   57270 start.go:254] writing updated cluster config ...
	I0410 22:55:07.090957   57270 ssh_runner.go:195] Run: rm -f paused
	I0410 22:55:07.140644   57270 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.1 (minor skew: 1)
	I0410 22:55:07.142770   57270 out.go:177] * Done! kubectl is now configured to use "no-preload-646133" cluster and "default" namespace by default
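
The run above ends with the readiness sequence minikube applies to "no-preload-646133": wait for the node to report Ready, wait for each system-critical pod, then poll the apiserver healthz endpoint at https://192.168.50.17:8443/healthz until it answers 200 before declaring the start done. For readers who want to repeat that last probe by hand, the following is a minimal, hypothetical Go sketch, not minikube's own api_server.go code; the URL and the multi-minute budget are copied from the log above, and TLS verification is skipped only because the test cluster uses its own CA.

    // healthz_probe.go - hypothetical sketch: poll an apiserver /healthz endpoint
    // until it returns 200 OK, mirroring the "waiting for apiserver healthz status"
    // step in the log above. Not minikube source; address and timeout are taken
    // from the log, TLS verification is disabled for the self-signed test CA.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func main() {
        url := "https://192.168.50.17:8443/healthz" // endpoint shown in the log above
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s waits in the log
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthz returned 200: ok")
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            } else {
                fmt.Printf("healthz not reachable yet: %v\n", err)
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for apiserver healthz")
        os.Exit(1)
    }

Run it from a machine that can reach the VM's IP; a 200 response corresponds to the "returned 200: ok" lines in the log.
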
	I0410 22:56:40.435994   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:56:40.436123   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0410 22:56:40.437810   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:56:40.437872   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:56:40.437967   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:56:40.438082   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:56:40.438235   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:56:40.438321   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:56:40.440009   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:56:40.440110   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:56:40.440210   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:56:40.440336   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:56:40.440417   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:56:40.440501   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:56:40.440563   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:56:40.440622   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:56:40.440685   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:56:40.440752   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:56:40.440858   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:56:40.440923   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:56:40.441004   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:56:40.441076   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:56:40.441131   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:56:40.441185   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:56:40.441242   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:56:40.441375   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:56:40.441501   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:56:40.441565   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:56:40.441658   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:56:40.443122   57719 out.go:204]   - Booting up control plane ...
	I0410 22:56:40.443230   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:56:40.443332   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:56:40.443431   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:56:40.443549   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:56:40.443710   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:56:40.443783   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:56:40.443883   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444111   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444200   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444429   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444520   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444761   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444869   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445124   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445235   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445416   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445423   57719 kubeadm.go:309] 
	I0410 22:56:40.445465   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:56:40.445512   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:56:40.445520   57719 kubeadm.go:309] 
	I0410 22:56:40.445548   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:56:40.445595   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:56:40.445712   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:56:40.445722   57719 kubeadm.go:309] 
	I0410 22:56:40.445880   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:56:40.445931   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:56:40.445967   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:56:40.445972   57719 kubeadm.go:309] 
	I0410 22:56:40.446095   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:56:40.446190   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:56:40.446201   57719 kubeadm.go:309] 
	I0410 22:56:40.446326   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:56:40.446452   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:56:40.446548   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:56:40.446611   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:56:40.446659   57719 kubeadm.go:309] 
	I0410 22:56:40.446681   57719 kubeadm.go:393] duration metric: took 8m5.163157284s to StartCluster
	I0410 22:56:40.446805   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:56:40.446880   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:56:40.499163   57719 cri.go:89] found id: ""
	I0410 22:56:40.499196   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.499205   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:56:40.499212   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:56:40.499292   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:56:40.545429   57719 cri.go:89] found id: ""
	I0410 22:56:40.545465   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.545473   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:56:40.545479   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:56:40.545538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:56:40.583842   57719 cri.go:89] found id: ""
	I0410 22:56:40.583870   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.583880   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:56:40.583887   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:56:40.583957   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:56:40.621054   57719 cri.go:89] found id: ""
	I0410 22:56:40.621075   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.621083   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:56:40.621091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:56:40.621149   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:56:40.665133   57719 cri.go:89] found id: ""
	I0410 22:56:40.665161   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.665168   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:56:40.665175   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:56:40.665231   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:56:40.707490   57719 cri.go:89] found id: ""
	I0410 22:56:40.707519   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.707529   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:56:40.707536   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:56:40.707598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:56:40.748539   57719 cri.go:89] found id: ""
	I0410 22:56:40.748565   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.748576   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:56:40.748584   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:56:40.748644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:56:40.792326   57719 cri.go:89] found id: ""
	I0410 22:56:40.792349   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.792358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:56:40.792366   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:56:40.792376   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:56:40.844309   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:56:40.844346   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:56:40.859678   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:56:40.859715   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:56:40.950099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:56:40.950123   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:56:40.950141   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:56:41.073547   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:56:41.073589   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0410 22:56:41.124970   57719 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0410 22:56:41.125024   57719 out.go:239] * 
	W0410 22:56:41.125096   57719 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.125129   57719 out.go:239] * 
	W0410 22:56:41.126153   57719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:56:41.129869   57719 out.go:177] 
	W0410 22:56:41.131207   57719 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.131286   57719 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0410 22:56:41.131326   57719 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0410 22:56:41.133049   57719 out.go:177] 
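
The v1.20.0 start above fails with kubeadm timing out on the kubelet health check, after which minikube gathers diagnostics by asking CRI-O for each control-plane container (sudo crictl ps -a --quiet --name=<component>) and finds none. The following is a hedged Go sketch, not minikube's cri.go, of how those per-component queries could be repeated on the node; the component names and crictl flags are copied from the log, everything else (file name, output format) is illustrative, and it assumes crictl and passwordless sudo are available on the node.

    // cri_check_sketch.go - hypothetical sketch: run the same crictl queries the log
    // above runs ("sudo crictl ps -a --quiet --name=<component>") and report which
    // control-plane containers exist. Not minikube's cri.go; component names and
    // flags come from the log, the rest is illustrative.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager", "kube-proxy"}
        for _, name := range components {
            // --quiet prints only container IDs, one per line, or nothing at all.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%-24s crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(strings.TrimSpace(string(out)))
            if len(ids) == 0 {
                fmt.Printf("%-24s no containers found\n", name) // matches the found id: "" lines above
            } else {
                fmt.Printf("%-24s %d container(s): %s\n", name, len(ids), strings.Join(ids, ", "))
            }
        }
    }

Empty output for every component, as in the log, means the kubelet never launched the static pods; the report's own suggestion above is to retry with --extra-config=kubelet.cgroup-driver=systemd (see https://github.com/kubernetes/minikube/issues/4172).
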
	
	
	==> CRI-O <==
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.290806726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=135bccc5-1231-45c4-af48-f121a333d273 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.291950839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83369e8c-a2fc-4985-aa7e-7e79d07f0a25 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.292395527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790171292365416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83369e8c-a2fc-4985-aa7e-7e79d07f0a25 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.293358066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2d59f38-e49f-4b73-9c58-fed156906adf name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.293409245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2d59f38-e49f-4b73-9c58-fed156906adf name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.293650621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789392385667554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1dc13f36ea7a5ded1554b7c6697e0987fd40c7ebf17cca475ec8b0b8cfed81,PodSandboxId:a3a388381d1b5b3faacc34b89b54e0a12b7c8f80299767ba86d54d9a14c50050,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712789371889578822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 320f878f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3,PodSandboxId:e8068ec9c3c4f3650ac51ea3733b91d94bec34626d668d72d72ec69c59563d9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789369289746081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ghnvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ebd9b0-ecf0-4037-b5b0-547dad2354ba,},Annotations:map[string]string{io.kubernetes.container.hash: e4f85df5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712789361639886058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b,PodSandboxId:c397ba3b09882ccc1c830123edfe2babba2ead7db84fddb462ad7ec92d39efbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712789361635283392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mbwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44724487-9539-4079-9fd6
-40cb70208b95,},Annotations:map[string]string{io.kubernetes.container.hash: 3db0b90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14,PodSandboxId:46bd4334b1632938661e837855bc6ad1ef771620f76d494a084f53a7d4809179,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789356922394650,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9929847901461a760df7cd
55eacdb8ba,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072,PodSandboxId:ddb7b6f14e3c7a6f45aea5165980feb35944ee67c926ccf9b6f710b0b4392773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789356871638389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc50e24580ad579db03e5cd167e7fa1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d017430f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c,PodSandboxId:43e313fbc0995dd76558baf805ab503e2074e02a850714fac77905d3afadddb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789356842309835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863ed51eb16fa172b74df541a53ae3ab,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8c521e92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39,PodSandboxId:f06217fe54c3ae56d250a3d9d36b24c714c597e793a37a70b89a989b51b08918,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789356764603205,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97afe0be93fc66092f9b2a5325da352
b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2d59f38-e49f-4b73-9c58-fed156906adf name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.335611724Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a21a7790-94dd-4919-8359-558485aa0d12 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.336099664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a21a7790-94dd-4919-8359-558485aa0d12 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.337599053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c594a94-2d34-4244-a2b1-c090331b5291 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.338125250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790171338101414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c594a94-2d34-4244-a2b1-c090331b5291 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.338509474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5cf9821-640b-46c0-ad71-ab2ba7d3ecbe name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.338595235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5cf9821-640b-46c0-ad71-ab2ba7d3ecbe name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.338789743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789392385667554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1dc13f36ea7a5ded1554b7c6697e0987fd40c7ebf17cca475ec8b0b8cfed81,PodSandboxId:a3a388381d1b5b3faacc34b89b54e0a12b7c8f80299767ba86d54d9a14c50050,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712789371889578822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 320f878f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3,PodSandboxId:e8068ec9c3c4f3650ac51ea3733b91d94bec34626d668d72d72ec69c59563d9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789369289746081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ghnvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ebd9b0-ecf0-4037-b5b0-547dad2354ba,},Annotations:map[string]string{io.kubernetes.container.hash: e4f85df5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712789361639886058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b,PodSandboxId:c397ba3b09882ccc1c830123edfe2babba2ead7db84fddb462ad7ec92d39efbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712789361635283392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mbwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44724487-9539-4079-9fd6
-40cb70208b95,},Annotations:map[string]string{io.kubernetes.container.hash: 3db0b90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14,PodSandboxId:46bd4334b1632938661e837855bc6ad1ef771620f76d494a084f53a7d4809179,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789356922394650,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9929847901461a760df7cd
55eacdb8ba,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072,PodSandboxId:ddb7b6f14e3c7a6f45aea5165980feb35944ee67c926ccf9b6f710b0b4392773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789356871638389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc50e24580ad579db03e5cd167e7fa1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d017430f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c,PodSandboxId:43e313fbc0995dd76558baf805ab503e2074e02a850714fac77905d3afadddb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789356842309835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863ed51eb16fa172b74df541a53ae3ab,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8c521e92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39,PodSandboxId:f06217fe54c3ae56d250a3d9d36b24c714c597e793a37a70b89a989b51b08918,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789356764603205,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97afe0be93fc66092f9b2a5325da352
b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5cf9821-640b-46c0-ad71-ab2ba7d3ecbe name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.342411877Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=00920163-395c-41a3-813b-1f3ea231afb8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.343189933Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a3a388381d1b5b3faacc34b89b54e0a12b7c8f80299767ba86d54d9a14c50050,Metadata:&PodSandboxMetadata{Name:busybox,Uid:3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789368978782351,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:49:21.134596164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8068ec9c3c4f3650ac51ea3733b91d94bec34626d668d72d72ec69c59563d9d,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-ghnvx,Uid:88ebd9b0-ecf0-4037-b5b0-547dad2354ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:171278
9368968713534,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-ghnvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ebd9b0-ecf0-4037-b5b0-547dad2354ba,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:49:21.134587351Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0454883ef98cafab75880275ba90800e4a3658c3be47cc4b1010269f9628b89e,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-9l2hc,Uid:2f5cda2f-4d8f-4798-954e-5ef588f2b88f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789367169099010,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-9l2hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f5cda2f-4d8f-4798-954e-5ef588f2b88f,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10
T22:49:21.134600233Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c397ba3b09882ccc1c830123edfe2babba2ead7db84fddb462ad7ec92d39efbf,Metadata:&PodSandboxMetadata{Name:kube-proxy-5mbwx,Uid:44724487-9539-4079-9fd6-40cb70208b95,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789361461062558,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5mbwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44724487-9539-4079-9fd6-40cb70208b95,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:49:21.134599230Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e4e09f42-54ba-480e-a020-1ca071a54558,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789361447388876,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-04-10T22:49:21.134594768Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43e313fbc0995dd76558baf805ab503e2074e02a850714fac77905d3afadddb1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-519831,Uid:863ed51eb16fa172b74df541a53ae3ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789356610835677,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863ed51eb16fa172b74df541a53ae3ab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.170:8444,kubernetes.io/config.hash: 863ed51eb16fa172b74df541a53ae3ab,kubernetes.io/config.seen: 2024-04-10T22:49:16.126709948Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ddb7b6f14e3c7a6f45aea5165980feb35944ee67c926ccf9b6f710b0b43927
73,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-519831,Uid:ccc50e24580ad579db03e5cd167e7fa1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789356609597911,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc50e24580ad579db03e5cd167e7fa1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.170:2379,kubernetes.io/config.hash: ccc50e24580ad579db03e5cd167e7fa1,kubernetes.io/config.seen: 2024-04-10T22:49:16.126705966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:46bd4334b1632938661e837855bc6ad1ef771620f76d494a084f53a7d4809179,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-519831,Uid:9929847901461a760df7cd55eacdb8ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789356588799651,Labels:map[string]str
ing{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9929847901461a760df7cd55eacdb8ba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9929847901461a760df7cd55eacdb8ba,kubernetes.io/config.seen: 2024-04-10T22:49:16.126711943Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f06217fe54c3ae56d250a3d9d36b24c714c597e793a37a70b89a989b51b08918,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-519831,Uid:97afe0be93fc66092f9b2a5325da352b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789356586806431,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97afe0be93fc66092f9b2a5325da352b,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 97afe0be93fc66092f9b2a5325da352b,kubernetes.io/config.seen: 2024-04-10T22:49:16.126711193Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=00920163-395c-41a3-813b-1f3ea231afb8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.344094795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9706242-6890-47dd-940b-4570c4ffda17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.344142760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9706242-6890-47dd-940b-4570c4ffda17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.344314289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789392385667554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1dc13f36ea7a5ded1554b7c6697e0987fd40c7ebf17cca475ec8b0b8cfed81,PodSandboxId:a3a388381d1b5b3faacc34b89b54e0a12b7c8f80299767ba86d54d9a14c50050,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712789371889578822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 320f878f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3,PodSandboxId:e8068ec9c3c4f3650ac51ea3733b91d94bec34626d668d72d72ec69c59563d9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789369289746081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ghnvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ebd9b0-ecf0-4037-b5b0-547dad2354ba,},Annotations:map[string]string{io.kubernetes.container.hash: e4f85df5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712789361639886058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b,PodSandboxId:c397ba3b09882ccc1c830123edfe2babba2ead7db84fddb462ad7ec92d39efbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712789361635283392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mbwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44724487-9539-4079-9fd6
-40cb70208b95,},Annotations:map[string]string{io.kubernetes.container.hash: 3db0b90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14,PodSandboxId:46bd4334b1632938661e837855bc6ad1ef771620f76d494a084f53a7d4809179,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789356922394650,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9929847901461a760df7cd
55eacdb8ba,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072,PodSandboxId:ddb7b6f14e3c7a6f45aea5165980feb35944ee67c926ccf9b6f710b0b4392773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789356871638389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc50e24580ad579db03e5cd167e7fa1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d017430f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c,PodSandboxId:43e313fbc0995dd76558baf805ab503e2074e02a850714fac77905d3afadddb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789356842309835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863ed51eb16fa172b74df541a53ae3ab,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8c521e92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39,PodSandboxId:f06217fe54c3ae56d250a3d9d36b24c714c597e793a37a70b89a989b51b08918,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789356764603205,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97afe0be93fc66092f9b2a5325da352
b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9706242-6890-47dd-940b-4570c4ffda17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.383277574Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1445c10-4b57-4527-9ff1-c465d98c75c7 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.383354843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1445c10-4b57-4527-9ff1-c465d98c75c7 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.384858712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=019f1ea7-dfe0-46ea-8f5e-3d9fc85ce695 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.385414418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790171385386180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=019f1ea7-dfe0-46ea-8f5e-3d9fc85ce695 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.385907421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fde2b042-2684-40e1-be43-d5bb0639c09c name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.385975090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fde2b042-2684-40e1-be43-d5bb0639c09c name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:02:51 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:02:51.386254277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789392385667554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1dc13f36ea7a5ded1554b7c6697e0987fd40c7ebf17cca475ec8b0b8cfed81,PodSandboxId:a3a388381d1b5b3faacc34b89b54e0a12b7c8f80299767ba86d54d9a14c50050,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712789371889578822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 320f878f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3,PodSandboxId:e8068ec9c3c4f3650ac51ea3733b91d94bec34626d668d72d72ec69c59563d9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789369289746081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ghnvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ebd9b0-ecf0-4037-b5b0-547dad2354ba,},Annotations:map[string]string{io.kubernetes.container.hash: e4f85df5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712789361639886058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b,PodSandboxId:c397ba3b09882ccc1c830123edfe2babba2ead7db84fddb462ad7ec92d39efbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712789361635283392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mbwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44724487-9539-4079-9fd6
-40cb70208b95,},Annotations:map[string]string{io.kubernetes.container.hash: 3db0b90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14,PodSandboxId:46bd4334b1632938661e837855bc6ad1ef771620f76d494a084f53a7d4809179,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789356922394650,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9929847901461a760df7cd
55eacdb8ba,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072,PodSandboxId:ddb7b6f14e3c7a6f45aea5165980feb35944ee67c926ccf9b6f710b0b4392773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789356871638389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc50e24580ad579db03e5cd167e7fa1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d017430f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c,PodSandboxId:43e313fbc0995dd76558baf805ab503e2074e02a850714fac77905d3afadddb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789356842309835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863ed51eb16fa172b74df541a53ae3ab,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8c521e92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39,PodSandboxId:f06217fe54c3ae56d250a3d9d36b24c714c597e793a37a70b89a989b51b08918,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789356764603205,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97afe0be93fc66092f9b2a5325da352
b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fde2b042-2684-40e1-be43-d5bb0639c09c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3e97b78e0d5a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   bfca7f6e83b9d       storage-provisioner
	ac1dc13f36ea7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   a3a388381d1b5       busybox
	d0547fcd34655       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   e8068ec9c3c4f       coredns-76f75df574-ghnvx
	912eddb6d12e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   bfca7f6e83b9d       storage-provisioner
	7c920ae26b3cc       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      13 minutes ago      Running             kube-proxy                1                   c397ba3b09882       kube-proxy-5mbwx
	b9d427d7dee4f       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      13 minutes ago      Running             kube-scheduler            1                   46bd4334b1632       kube-scheduler-default-k8s-diff-port-519831
	34b1b1f972a8e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   ddb7b6f14e3c7       etcd-default-k8s-diff-port-519831
	74618e834b629       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      13 minutes ago      Running             kube-apiserver            1                   43e313fbc0995       kube-apiserver-default-k8s-diff-port-519831
	c9b5f1abd2321       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      13 minutes ago      Running             kube-controller-manager   1                   f06217fe54c3a       kube-controller-manager-default-k8s-diff-port-519831
	
	
	==> coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33418 - 55032 "HINFO IN 1503125876999987611.2945278978932795479. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01765054s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-519831
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-519831
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=default-k8s-diff-port-519831
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T22_43_47_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:43:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-519831
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 23:02:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 23:00:03 +0000   Wed, 10 Apr 2024 22:43:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 23:00:03 +0000   Wed, 10 Apr 2024 22:43:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 23:00:03 +0000   Wed, 10 Apr 2024 22:43:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 23:00:03 +0000   Wed, 10 Apr 2024 22:49:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.170
	  Hostname:    default-k8s-diff-port-519831
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6a949113b4840c7820f576b4306ecaf
	  System UUID:                e6a94911-3b48-40c7-820f-576b4306ecaf
	  Boot ID:                    db3de20c-9744-477a-b762-2fb75ae1f894
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-76f75df574-ghnvx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-519831                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-519831             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-519831    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-5mbwx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-519831             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-9l2hc                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-519831 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-519831 event: Registered Node default-k8s-diff-port-519831 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-519831 event: Registered Node default-k8s-diff-port-519831 in Controller
	
	
	==> dmesg <==
	[Apr10 22:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052452] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045077] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.761794] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.915689] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.649830] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr10 22:49] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.064376] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069716] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.179497] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.172233] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.318231] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.893503] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.071288] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.213109] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +5.645830] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.402995] systemd-fstab-generator[1563]: Ignoring "noauto" option for root device
	[  +3.260151] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.322773] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] <==
	{"level":"info","ts":"2024-04-10T22:49:17.479799Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dc4e5d85287e9b45","local-member-id":"dbd92f24fe4fe75a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:49:17.479845Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:49:17.491623Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-10T22:49:17.491852Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"dbd92f24fe4fe75a","initial-advertise-peer-urls":["https://192.168.72.170:2380"],"listen-peer-urls":["https://192.168.72.170:2380"],"advertise-client-urls":["https://192.168.72.170:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.170:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-10T22:49:17.491902Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-10T22:49:17.497491Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.170:2380"}
	{"level":"info","ts":"2024-04-10T22:49:17.497531Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.170:2380"}
	{"level":"info","ts":"2024-04-10T22:49:19.108547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbd92f24fe4fe75a is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-10T22:49:19.108717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbd92f24fe4fe75a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-10T22:49:19.108766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbd92f24fe4fe75a received MsgPreVoteResp from dbd92f24fe4fe75a at term 2"}
	{"level":"info","ts":"2024-04-10T22:49:19.108809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbd92f24fe4fe75a became candidate at term 3"}
	{"level":"info","ts":"2024-04-10T22:49:19.108836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbd92f24fe4fe75a received MsgVoteResp from dbd92f24fe4fe75a at term 3"}
	{"level":"info","ts":"2024-04-10T22:49:19.108874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbd92f24fe4fe75a became leader at term 3"}
	{"level":"info","ts":"2024-04-10T22:49:19.108903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dbd92f24fe4fe75a elected leader dbd92f24fe4fe75a at term 3"}
	{"level":"info","ts":"2024-04-10T22:49:19.197427Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"dbd92f24fe4fe75a","local-member-attributes":"{Name:default-k8s-diff-port-519831 ClientURLs:[https://192.168.72.170:2379]}","request-path":"/0/members/dbd92f24fe4fe75a/attributes","cluster-id":"dc4e5d85287e9b45","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-10T22:49:19.197445Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:49:19.197479Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:49:19.19769Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-10T22:49:19.198265Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-10T22:49:19.201951Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.170:2379"}
	{"level":"info","ts":"2024-04-10T22:49:19.206541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-10T22:49:33.316528Z","caller":"traceutil/trace.go:171","msg":"trace[642332303] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"118.035105ms","start":"2024-04-10T22:49:33.198479Z","end":"2024-04-10T22:49:33.316514Z","steps":["trace[642332303] 'process raft request'  (duration: 117.918675ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T22:59:19.245292Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":830}
	{"level":"info","ts":"2024-04-10T22:59:19.256752Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":830,"took":"10.768809ms","hash":132352172,"current-db-size-bytes":2588672,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2588672,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-10T22:59:19.256861Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":132352172,"revision":830,"compact-revision":-1}
	
	
	==> kernel <==
	 23:02:51 up 14 min,  0 users,  load average: 0.05, 0.15, 0.10
	Linux default-k8s-diff-port-519831 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] <==
	I0410 22:57:21.710773       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 22:59:20.711650       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:59:20.711804       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0410 22:59:21.712263       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:59:21.712370       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 22:59:21.712401       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 22:59:21.712315       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:59:21.712549       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 22:59:21.713477       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:00:21.712746       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:00:21.712978       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:00:21.713071       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:00:21.714113       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:00:21.714192       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:00:21.714232       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:02:21.713665       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:02:21.713751       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:02:21.713764       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:02:21.714831       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:02:21.714981       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:02:21.715109       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] <==
	I0410 22:57:03.916998       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 22:57:33.354945       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:57:33.924983       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 22:58:03.360321       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:58:03.932598       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 22:58:33.366287       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:58:33.940420       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 22:59:03.371305       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:59:03.947942       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 22:59:33.376174       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:59:33.957835       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:00:03.381444       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:00:03.966358       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:00:33.387281       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:00:33.975711       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0410 23:00:37.179304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="328.709µs"
	I0410 23:00:48.177230       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="219.178µs"
	E0410 23:01:03.393823       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:01:03.984973       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:01:33.399501       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:01:33.995002       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:02:03.405217       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:02:04.006988       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:02:33.411533       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:02:34.020837       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] <==
	I0410 22:49:21.933511       1 server_others.go:72] "Using iptables proxy"
	I0410 22:49:21.944869       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.170"]
	I0410 22:49:21.986389       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 22:49:21.986437       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:49:21.986451       1 server_others.go:168] "Using iptables Proxier"
	I0410 22:49:21.989702       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:49:21.989921       1 server.go:865] "Version info" version="v1.29.3"
	I0410 22:49:21.989970       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:49:21.991943       1 config.go:315] "Starting node config controller"
	I0410 22:49:21.991978       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 22:49:21.993777       1 config.go:188] "Starting service config controller"
	I0410 22:49:21.993852       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 22:49:21.998527       1 config.go:97] "Starting endpoint slice config controller"
	I0410 22:49:21.998653       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 22:49:22.092973       1 shared_informer.go:318] Caches are synced for node config
	I0410 22:49:22.099367       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0410 22:49:22.099431       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] <==
	I0410 22:49:18.217487       1 serving.go:380] Generated self-signed cert in-memory
	W0410 22:49:20.678578       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0410 22:49:20.678623       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 22:49:20.678637       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0410 22:49:20.678647       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0410 22:49:20.711086       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0410 22:49:20.711202       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:49:20.717074       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0410 22:49:20.717258       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0410 22:49:20.720149       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0410 22:49:20.720334       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0410 22:49:20.819163       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 10 23:00:25 default-k8s-diff-port-519831 kubelet[938]: E0410 23:00:25.175598     938 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 10 23:00:25 default-k8s-diff-port-519831 kubelet[938]: E0410 23:00:25.175808     938 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 10 23:00:25 default-k8s-diff-port-519831 kubelet[938]: E0410 23:00:25.176443     938 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5d4k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-9l2hc_kube-system(2f5cda2f-4d8f-4798-954e-5ef588f2b88f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 10 23:00:25 default-k8s-diff-port-519831 kubelet[938]: E0410 23:00:25.176654     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:00:37 default-k8s-diff-port-519831 kubelet[938]: E0410 23:00:37.161270     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:00:48 default-k8s-diff-port-519831 kubelet[938]: E0410 23:00:48.162821     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:01:03 default-k8s-diff-port-519831 kubelet[938]: E0410 23:01:03.161158     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:01:16 default-k8s-diff-port-519831 kubelet[938]: E0410 23:01:16.184970     938 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 23:01:16 default-k8s-diff-port-519831 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:01:16 default-k8s-diff-port-519831 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:01:16 default-k8s-diff-port-519831 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:01:16 default-k8s-diff-port-519831 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:01:17 default-k8s-diff-port-519831 kubelet[938]: E0410 23:01:17.160761     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:01:31 default-k8s-diff-port-519831 kubelet[938]: E0410 23:01:31.163217     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:01:45 default-k8s-diff-port-519831 kubelet[938]: E0410 23:01:45.160871     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:01:56 default-k8s-diff-port-519831 kubelet[938]: E0410 23:01:56.160407     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:02:08 default-k8s-diff-port-519831 kubelet[938]: E0410 23:02:08.160713     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:02:16 default-k8s-diff-port-519831 kubelet[938]: E0410 23:02:16.185412     938 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 23:02:16 default-k8s-diff-port-519831 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:02:16 default-k8s-diff-port-519831 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:02:16 default-k8s-diff-port-519831 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:02:16 default-k8s-diff-port-519831 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:02:19 default-k8s-diff-port-519831 kubelet[938]: E0410 23:02:19.159863     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:02:34 default-k8s-diff-port-519831 kubelet[938]: E0410 23:02:34.160496     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:02:45 default-k8s-diff-port-519831 kubelet[938]: E0410 23:02:45.160863     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	
	
	==> storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] <==
	I0410 22:49:52.497299       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0410 22:49:52.507960       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0410 22:49:52.509176       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0410 22:50:09.915410       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0410 22:50:09.916253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-519831_a9bc36bb-fa90-48cd-80dd-aa4c941ecc2b!
	I0410 22:50:09.917353       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0ec6d66-6487-4c61-bc0a-39f866affbb8", APIVersion:"v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-519831_a9bc36bb-fa90-48cd-80dd-aa4c941ecc2b became leader
	I0410 22:50:10.017379       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-519831_a9bc36bb-fa90-48cd-80dd-aa4c941ecc2b!
	
	
	==> storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] <==
	I0410 22:49:21.912544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0410 22:49:51.920616       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-519831 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9l2hc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-519831 describe pod metrics-server-57f55c9bc5-9l2hc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-519831 describe pod metrics-server-57f55c9bc5-9l2hc: exit status 1 (64.199677ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9l2hc" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-519831 describe pod metrics-server-57f55c9bc5-9l2hc: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.28s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.37s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-706500 -n embed-certs-706500
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-10 23:03:11.149180887 +0000 UTC m=+5713.385610170
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-706500 -n embed-certs-706500
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-706500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-706500 logs -n 25: (2.263504528s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-646133             | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:41 UTC |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:42 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-706500            | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC | 10 Apr 24 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862528        | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-646133                  | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-464519                              | cert-expiration-464519       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-676292 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	|         | disable-driver-mounts-676292                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862528             | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-519831  | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-706500                 | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:54 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-519831       | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC | 10 Apr 24 22:53 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:46:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:46:47.395706   58701 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:46:47.395991   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396002   58701 out.go:304] Setting ErrFile to fd 2...
	I0410 22:46:47.396019   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396208   58701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:46:47.396802   58701 out.go:298] Setting JSON to false
	I0410 22:46:47.397726   58701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5350,"bootTime":1712783858,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:46:47.397786   58701 start.go:139] virtualization: kvm guest
	I0410 22:46:47.400191   58701 out.go:177] * [default-k8s-diff-port-519831] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:46:47.401578   58701 notify.go:220] Checking for updates...
	I0410 22:46:47.402880   58701 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:46:47.404311   58701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:46:47.405790   58701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:46:47.407012   58701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:46:47.408130   58701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:46:47.409497   58701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:46:47.411183   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:46:47.411591   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.411632   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.426322   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0410 22:46:47.426759   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.427345   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.427366   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.427716   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.427926   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.428221   58701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:46:47.428646   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.428696   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.444105   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0410 22:46:47.444537   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.445035   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.445058   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.445398   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.445592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.480451   58701 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:46:47.481837   58701 start.go:297] selected driver: kvm2
	I0410 22:46:47.481852   58701 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.481985   58701 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:46:47.482657   58701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.482750   58701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:46:47.498330   58701 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:46:47.498668   58701 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:46:47.498735   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:46:47.498748   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:46:47.498784   58701 start.go:340] cluster config:
	{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.498877   58701 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.500723   58701 out.go:177] * Starting "default-k8s-diff-port-519831" primary control-plane node in "default-k8s-diff-port-519831" cluster
	I0410 22:46:47.180678   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:47.501967   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:46:47.502009   58701 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 22:46:47.502030   58701 cache.go:56] Caching tarball of preloaded images
	I0410 22:46:47.502108   58701 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:46:47.502118   58701 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 22:46:47.502202   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:46:47.502366   58701 start.go:360] acquireMachinesLock for default-k8s-diff-port-519831: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:46:50.252732   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:56.332647   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:59.404660   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:05.484717   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:08.556632   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:14.636753   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:17.708788   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:23.788661   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:26.860683   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:32.940630   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:36.012689   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:42.092749   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:45.164706   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:51.244682   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:54.316652   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:00.396637   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:03.468672   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:06.472768   57719 start.go:364] duration metric: took 4m5.937893783s to acquireMachinesLock for "old-k8s-version-862528"
	I0410 22:48:06.472833   57719 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:06.472852   57719 fix.go:54] fixHost starting: 
	I0410 22:48:06.473157   57719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:06.473186   57719 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:06.488728   57719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0410 22:48:06.489157   57719 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:06.489590   57719 main.go:141] libmachine: Using API Version  1
	I0410 22:48:06.489612   57719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:06.490011   57719 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:06.490171   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:06.490337   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetState
	I0410 22:48:06.491997   57719 fix.go:112] recreateIfNeeded on old-k8s-version-862528: state=Stopped err=<nil>
	I0410 22:48:06.492030   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	W0410 22:48:06.492234   57719 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:06.493891   57719 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862528" ...
	I0410 22:48:06.469869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:06.469904   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470235   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:48:06.470261   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470529   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:48:06.472589   57270 machine.go:97] duration metric: took 4m35.561692081s to provisionDockerMachine
	I0410 22:48:06.472636   57270 fix.go:56] duration metric: took 4m35.586484815s for fixHost
	I0410 22:48:06.472646   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 4m35.586540892s
	W0410 22:48:06.472671   57270 start.go:713] error starting host: provision: host is not running
	W0410 22:48:06.472773   57270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0410 22:48:06.472785   57270 start.go:728] Will try again in 5 seconds ...
	I0410 22:48:06.495233   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .Start
	I0410 22:48:06.495416   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring networks are active...
	I0410 22:48:06.496254   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network default is active
	I0410 22:48:06.496589   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network mk-old-k8s-version-862528 is active
	I0410 22:48:06.497002   57719 main.go:141] libmachine: (old-k8s-version-862528) Getting domain xml...
	I0410 22:48:06.497751   57719 main.go:141] libmachine: (old-k8s-version-862528) Creating domain...
	I0410 22:48:07.722703   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting to get IP...
	I0410 22:48:07.723942   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:07.724373   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:07.724451   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:07.724338   59021 retry.go:31] will retry after 284.455366ms: waiting for machine to come up
	I0410 22:48:08.011077   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.011598   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.011628   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.011545   59021 retry.go:31] will retry after 337.946102ms: waiting for machine to come up
	I0410 22:48:08.351219   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.351725   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.351744   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.351691   59021 retry.go:31] will retry after 454.774669ms: waiting for machine to come up
	I0410 22:48:08.808516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.808953   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.808991   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.808893   59021 retry.go:31] will retry after 484.667282ms: waiting for machine to come up
	I0410 22:48:09.295665   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.296127   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.296148   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.296083   59021 retry.go:31] will retry after 515.00238ms: waiting for machine to come up
	I0410 22:48:09.812855   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.813337   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.813362   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.813289   59021 retry.go:31] will retry after 596.67118ms: waiting for machine to come up
	I0410 22:48:10.411103   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:10.411616   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:10.411640   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:10.411568   59021 retry.go:31] will retry after 1.035822512s: waiting for machine to come up
	I0410 22:48:11.473748   57270 start.go:360] acquireMachinesLock for no-preload-646133: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:48:11.448894   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:11.449358   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:11.449388   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:11.449315   59021 retry.go:31] will retry after 1.258446774s: waiting for machine to come up
	I0410 22:48:12.709048   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:12.709587   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:12.709618   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:12.709530   59021 retry.go:31] will retry after 1.149380432s: waiting for machine to come up
	I0410 22:48:13.860550   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:13.861084   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:13.861110   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:13.861028   59021 retry.go:31] will retry after 1.733388735s: waiting for machine to come up
	I0410 22:48:15.595870   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:15.596447   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:15.596487   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:15.596343   59021 retry.go:31] will retry after 2.536794123s: waiting for machine to come up
	I0410 22:48:18.135592   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:18.136099   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:18.136128   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:18.136056   59021 retry.go:31] will retry after 3.390395523s: waiting for machine to come up
	I0410 22:48:21.528518   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:21.528964   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:21.529008   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:21.528906   59021 retry.go:31] will retry after 4.165145769s: waiting for machine to come up
	I0410 22:48:26.977460   58186 start.go:364] duration metric: took 3m29.815175662s to acquireMachinesLock for "embed-certs-706500"
	I0410 22:48:26.977524   58186 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:26.977532   58186 fix.go:54] fixHost starting: 
	I0410 22:48:26.977935   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:26.977965   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:26.994175   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0410 22:48:26.994552   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:26.995016   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:48:26.995040   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:26.995447   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:26.995652   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:26.995826   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:48:26.997547   58186 fix.go:112] recreateIfNeeded on embed-certs-706500: state=Stopped err=<nil>
	I0410 22:48:26.997580   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	W0410 22:48:26.997902   58186 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:27.000500   58186 out.go:177] * Restarting existing kvm2 VM for "embed-certs-706500" ...
	I0410 22:48:27.002204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Start
	I0410 22:48:27.002398   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring networks are active...
	I0410 22:48:27.003133   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network default is active
	I0410 22:48:27.003465   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network mk-embed-certs-706500 is active
	I0410 22:48:27.003863   58186 main.go:141] libmachine: (embed-certs-706500) Getting domain xml...
	I0410 22:48:27.004603   58186 main.go:141] libmachine: (embed-certs-706500) Creating domain...
	I0410 22:48:25.699595   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700129   57719 main.go:141] libmachine: (old-k8s-version-862528) Found IP for machine: 192.168.61.178
	I0410 22:48:25.700159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has current primary IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700166   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserving static IP address...
	I0410 22:48:25.700654   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserved static IP address: 192.168.61.178
	I0410 22:48:25.700676   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting for SSH to be available...
	I0410 22:48:25.700704   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.700732   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | skip adding static IP to network mk-old-k8s-version-862528 - found existing host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"}
	I0410 22:48:25.700745   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Getting to WaitForSSH function...
	I0410 22:48:25.702929   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703290   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.703322   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703490   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH client type: external
	I0410 22:48:25.703519   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa (-rw-------)
	I0410 22:48:25.703551   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:25.703590   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | About to run SSH command:
	I0410 22:48:25.703635   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | exit 0
	I0410 22:48:25.832738   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | SSH cmd err, output: <nil>: 
	I0410 22:48:25.833133   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetConfigRaw
	I0410 22:48:25.833784   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:25.836323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.836874   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.836908   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.837156   57719 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json ...
	I0410 22:48:25.837472   57719 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:25.837502   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:25.837710   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.840159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840488   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.840516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840593   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.840815   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.840992   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.841134   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.841337   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.841543   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.841556   57719 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:25.957153   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:25.957189   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957438   57719 buildroot.go:166] provisioning hostname "old-k8s-version-862528"
	I0410 22:48:25.957461   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.960779   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961149   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.961184   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961332   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.961546   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961689   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961864   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.962020   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.962196   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.962207   57719 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862528 && echo "old-k8s-version-862528" | sudo tee /etc/hostname
	I0410 22:48:26.087073   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862528
	
	I0410 22:48:26.087099   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.089770   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090109   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.090140   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090261   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.090446   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090623   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090760   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.090951   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.091131   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.091155   57719 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:26.214422   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:26.214462   57719 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:26.214490   57719 buildroot.go:174] setting up certificates
	I0410 22:48:26.214498   57719 provision.go:84] configureAuth start
	I0410 22:48:26.214509   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:26.214793   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.217463   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217809   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.217850   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217975   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.219971   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220235   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.220265   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220480   57719 provision.go:143] copyHostCerts
	I0410 22:48:26.220526   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:26.220542   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:26.220604   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:26.220703   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:26.220712   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:26.220736   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:26.220789   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:26.220796   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:26.220817   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:26.220864   57719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862528 san=[127.0.0.1 192.168.61.178 localhost minikube old-k8s-version-862528]
	I0410 22:48:26.288372   57719 provision.go:177] copyRemoteCerts
	I0410 22:48:26.288445   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:26.288468   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.290980   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291298   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.291339   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291444   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.291635   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.291809   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.291927   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.379823   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:26.405285   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0410 22:48:26.430122   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:26.456124   57719 provision.go:87] duration metric: took 241.614364ms to configureAuth
	I0410 22:48:26.456154   57719 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:26.456356   57719 config.go:182] Loaded profile config "old-k8s-version-862528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:48:26.456480   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.459028   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459335   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.459366   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.459742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.459888   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.460037   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.460211   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.460379   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.460413   57719 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:26.732588   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:26.732614   57719 machine.go:97] duration metric: took 895.122467ms to provisionDockerMachine
	I0410 22:48:26.732627   57719 start.go:293] postStartSetup for "old-k8s-version-862528" (driver="kvm2")
	I0410 22:48:26.732641   57719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:26.732679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.733014   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:26.733044   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.735820   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736217   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.736244   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736418   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.736630   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.736840   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.737020   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.823452   57719 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:26.827806   57719 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:26.827827   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:26.827899   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:26.828009   57719 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:26.828122   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:26.837564   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:26.862278   57719 start.go:296] duration metric: took 129.638185ms for postStartSetup
	I0410 22:48:26.862325   57719 fix.go:56] duration metric: took 20.389482643s for fixHost
	I0410 22:48:26.862346   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.864911   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865277   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.865301   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865419   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.865597   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865872   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.866083   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.866283   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.866300   57719 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:26.977317   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789306.948982315
	
	I0410 22:48:26.977337   57719 fix.go:216] guest clock: 1712789306.948982315
	I0410 22:48:26.977344   57719 fix.go:229] Guest: 2024-04-10 22:48:26.948982315 +0000 UTC Remote: 2024-04-10 22:48:26.862329953 +0000 UTC m=+266.486936912 (delta=86.652362ms)
	I0410 22:48:26.977362   57719 fix.go:200] guest clock delta is within tolerance: 86.652362ms
	I0410 22:48:26.977366   57719 start.go:83] releasing machines lock for "old-k8s-version-862528", held for 20.504554043s
	I0410 22:48:26.977386   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.977653   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.980035   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980376   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.980419   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980602   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981224   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981421   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981516   57719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:26.981558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.981645   57719 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:26.981670   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.984375   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984568   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984840   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.984868   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984953   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985030   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.985079   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.985118   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985236   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985277   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985374   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985450   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.985516   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985635   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:27.105002   57719 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:27.111205   57719 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:27.261678   57719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:27.268336   57719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:27.268423   57719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:27.290099   57719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:27.290122   57719 start.go:494] detecting cgroup driver to use...
	I0410 22:48:27.290174   57719 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:27.308787   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:27.325557   57719 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:27.325611   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:27.340859   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:27.355570   57719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:27.479670   57719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:27.653364   57719 docker.go:233] disabling docker service ...
	I0410 22:48:27.653424   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:27.669775   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:27.683654   57719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:27.813212   57719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:27.929620   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:27.946085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:27.966341   57719 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0410 22:48:27.966404   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.978022   57719 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:27.978111   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.989324   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.001429   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.012965   57719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:28.024663   57719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:28.034362   57719 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:28.034423   57719 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:28.048740   57719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:48:28.060698   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:28.188526   57719 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:28.348442   57719 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:28.348523   57719 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:28.353501   57719 start.go:562] Will wait 60s for crictl version
	I0410 22:48:28.353566   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:28.357486   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:28.391138   57719 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:28.391221   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.421399   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.455851   57719 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0410 22:48:28.457534   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:28.460913   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461297   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:28.461323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461558   57719 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:28.466450   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:28.480549   57719 kubeadm.go:877] updating cluster {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:28.480671   57719 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:48:28.480775   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:28.536971   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:28.537034   57719 ssh_runner.go:195] Run: which lz4
	I0410 22:48:28.541757   57719 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:48:28.546381   57719 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:28.546413   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0410 22:48:30.411805   57719 crio.go:462] duration metric: took 1.870076139s to copy over tarball
	I0410 22:48:30.411900   57719 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:28.229217   58186 main.go:141] libmachine: (embed-certs-706500) Waiting to get IP...
	I0410 22:48:28.230257   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.230673   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.230724   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.230643   59155 retry.go:31] will retry after 262.296498ms: waiting for machine to come up
	I0410 22:48:28.494117   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.494631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.494660   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.494584   59155 retry.go:31] will retry after 237.287095ms: waiting for machine to come up
	I0410 22:48:28.733250   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.733795   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.733817   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.733755   59155 retry.go:31] will retry after 387.436239ms: waiting for machine to come up
	I0410 22:48:29.123585   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.124128   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.124163   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.124073   59155 retry.go:31] will retry after 428.418916ms: waiting for machine to come up
	I0410 22:48:29.554781   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.555244   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.555285   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.555235   59155 retry.go:31] will retry after 683.194159ms: waiting for machine to come up
	I0410 22:48:30.239955   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:30.240385   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:30.240463   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:30.240365   59155 retry.go:31] will retry after 764.240086ms: waiting for machine to come up
	I0410 22:48:31.006294   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:31.006789   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:31.006816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:31.006750   59155 retry.go:31] will retry after 1.113674235s: waiting for machine to come up
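	The interleaved 58186 lines above show libmachine polling libvirt/DHCP for the new VM's IP, sleeping a little longer after each miss before retrying. As a rough, self-contained Go sketch of that grow-the-delay polling pattern (illustrative only, not minikube's actual retry.go):
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retry calls fn up to attempts times, sleeping a growing delay between
	// tries, mirroring the "will retry after ..." messages in the log above.
	func retry(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay = delay * 3 / 2 // wait a bit longer before the next poll
		}
		return err
	}

	func main() {
		tries := 0
		err := retry(5, 250*time.Millisecond, func() error {
			tries++
			if tries < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("result:", err)
	}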
	I0410 22:48:33.358026   57719 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946092727s)
	I0410 22:48:33.358059   57719 crio.go:469] duration metric: took 2.946222933s to extract the tarball
	I0410 22:48:33.358069   57719 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:33.402924   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:33.441006   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:33.441033   57719 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:48:33.441090   57719 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.441142   57719 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.441203   57719 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.441210   57719 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.441318   57719 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0410 22:48:33.441339   57719 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.441375   57719 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.441395   57719 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442645   57719 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.442667   57719 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.442706   57719 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.442717   57719 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0410 22:48:33.442796   57719 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.442807   57719 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442814   57719 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.442866   57719 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.651119   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.652634   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.665548   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.669396   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.672510   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.674137   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0410 22:48:33.686915   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.756592   57719 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0410 22:48:33.756639   57719 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.756696   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.756696   57719 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0410 22:48:33.756789   57719 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.756810   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867043   57719 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0410 22:48:33.867061   57719 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0410 22:48:33.867090   57719 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.867091   57719 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.867135   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867166   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867185   57719 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0410 22:48:33.867220   57719 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.867252   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867261   57719 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0410 22:48:33.867303   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.867311   57719 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0410 22:48:33.867355   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867359   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.867286   57719 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0410 22:48:33.867452   57719 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.867481   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.871719   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.881086   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.964827   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.964854   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0410 22:48:33.964932   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0410 22:48:33.964948   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.976084   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0410 22:48:33.976155   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0410 22:48:33.976205   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0410 22:48:34.011460   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0410 22:48:34.289751   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:34.429542   57719 cache_images.go:92] duration metric: took 988.487885ms to LoadCachedImages
	W0410 22:48:34.429636   57719 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0410 22:48:34.429665   57719 kubeadm.go:928] updating node { 192.168.61.178 8443 v1.20.0 crio true true} ...
	I0410 22:48:34.429782   57719 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:34.429870   57719 ssh_runner.go:195] Run: crio config
	I0410 22:48:34.478794   57719 cni.go:84] Creating CNI manager for ""
	I0410 22:48:34.478829   57719 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:34.478845   57719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:34.478868   57719 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862528 NodeName:old-k8s-version-862528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0410 22:48:34.479065   57719 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862528"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:34.479147   57719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0410 22:48:34.489950   57719 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:34.490007   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:34.500261   57719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0410 22:48:34.517530   57719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:34.534814   57719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0410 22:48:34.552669   57719 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:34.556612   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:34.569643   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:34.700791   57719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:34.719682   57719 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528 for IP: 192.168.61.178
	I0410 22:48:34.719703   57719 certs.go:194] generating shared ca certs ...
	I0410 22:48:34.719722   57719 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:34.719900   57719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:34.719951   57719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:34.719965   57719 certs.go:256] generating profile certs ...
	I0410 22:48:34.720091   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.key
	I0410 22:48:34.720155   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c
	I0410 22:48:34.720199   57719 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key
	I0410 22:48:34.720337   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:34.720376   57719 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:34.720386   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:34.720438   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:34.720472   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:34.720502   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:34.720557   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:34.721238   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:34.769810   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:34.805397   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:34.846743   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:34.888720   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0410 22:48:34.915958   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:48:34.962182   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:34.992444   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:35.023525   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:35.051098   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:35.077305   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:35.102172   57719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:35.121381   57719 ssh_runner.go:195] Run: openssl version
	I0410 22:48:35.127869   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:35.140056   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145172   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145242   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.152081   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:48:35.164621   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:35.176511   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182164   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182217   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.188968   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:35.201491   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:35.213468   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218519   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218586   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.224872   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:35.236964   57719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:35.242262   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:35.249245   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:35.256301   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:35.263359   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:35.270166   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:35.276953   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
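	Each of the openssl runs above uses -checkend 86400, i.e. it asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration before the cluster restart. A minimal Go equivalent of that check (the certificate path below is only an example lifted from the log; this is not minikube's code):
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, mirroring `openssl x509 -checkend` from the log above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path is illustrative; substitute any certificate on disk.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}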
	I0410 22:48:35.283529   57719 kubeadm.go:391] StartCluster: {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:35.283643   57719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:35.283700   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.328461   57719 cri.go:89] found id: ""
	I0410 22:48:35.328532   57719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:35.340207   57719 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:35.340235   57719 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:35.340245   57719 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:35.340293   57719 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:35.351212   57719 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:35.352189   57719 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:48:35.352989   57719 kubeconfig.go:62] /home/jenkins/minikube-integration/18610-5679/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862528" cluster setting kubeconfig missing "old-k8s-version-862528" context setting]
	I0410 22:48:35.353956   57719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:32.122313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:32.122773   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:32.122816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:32.122717   59155 retry.go:31] will retry after 1.052378413s: waiting for machine to come up
	I0410 22:48:33.176207   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:33.176621   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:33.176665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:33.176568   59155 retry.go:31] will retry after 1.548572633s: waiting for machine to come up
	I0410 22:48:34.726554   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:34.726992   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:34.727020   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:34.726938   59155 retry.go:31] will retry after 1.800911659s: waiting for machine to come up
	I0410 22:48:36.529629   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:36.530133   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:36.530164   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:36.530085   59155 retry.go:31] will retry after 2.434743044s: waiting for machine to come up
	I0410 22:48:35.428830   57719 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:35.479813   57719 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.178
	I0410 22:48:35.479853   57719 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:35.479882   57719 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:35.479940   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.520506   57719 cri.go:89] found id: ""
	I0410 22:48:35.520577   57719 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:35.538167   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:35.548571   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:35.548600   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:35.548662   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:35.558559   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:35.558612   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:35.568950   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:35.578644   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:35.578712   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:35.589075   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.600265   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:35.600321   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.611459   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:35.621712   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:35.621785   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:35.632133   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:35.643494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:35.775309   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.133286   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.35793645s)
	I0410 22:48:37.133334   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.368687   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.497136   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.584652   57719 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:37.584744   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.085293   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.585489   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.584951   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:40.085144   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.966866   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:38.967360   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:38.967383   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:38.967339   59155 retry.go:31] will retry after 3.219302627s: waiting for machine to come up
	I0410 22:48:40.585356   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.084839   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.585434   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.085797   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.585578   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.085621   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.585581   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.584785   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:45.085394   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
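	At this point api_server.go simply re-runs pgrep every 500ms until a kube-apiserver process shows up or the start times out. A hedged Go sketch of the same wait loop, run locally rather than over SSH and not taken from the minikube sources:
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls `pgrep -xnf pattern` until it succeeds or timeout
	// elapses, mirroring the repeated "Run: sudo pgrep -xnf kube-apiserver.*"
	// lines in the log above.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // at least one matching process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no process matching %q after %v", pattern, timeout)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", 10*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver process is up")
	}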
	I0410 22:48:46.409467   58701 start.go:364] duration metric: took 1m58.907071516s to acquireMachinesLock for "default-k8s-diff-port-519831"
	I0410 22:48:46.409536   58701 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:46.409557   58701 fix.go:54] fixHost starting: 
	I0410 22:48:46.410030   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:46.410080   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:46.427877   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I0410 22:48:46.428357   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:46.428836   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:48:46.428858   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:46.429163   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:46.429354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:48:46.429494   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:48:46.431151   58701 fix.go:112] recreateIfNeeded on default-k8s-diff-port-519831: state=Stopped err=<nil>
	I0410 22:48:46.431192   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	W0410 22:48:46.431372   58701 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:46.433597   58701 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-519831" ...
	I0410 22:48:42.187835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:42.188266   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:42.188305   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:42.188191   59155 retry.go:31] will retry after 2.924293511s: waiting for machine to come up
	I0410 22:48:45.113669   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114211   58186 main.go:141] libmachine: (embed-certs-706500) Found IP for machine: 192.168.39.10
	I0410 22:48:45.114229   58186 main.go:141] libmachine: (embed-certs-706500) Reserving static IP address...
	I0410 22:48:45.114243   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has current primary IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114685   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.114711   58186 main.go:141] libmachine: (embed-certs-706500) DBG | skip adding static IP to network mk-embed-certs-706500 - found existing host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"}
	I0410 22:48:45.114721   58186 main.go:141] libmachine: (embed-certs-706500) Reserved static IP address: 192.168.39.10
	I0410 22:48:45.114728   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Getting to WaitForSSH function...
	I0410 22:48:45.114743   58186 main.go:141] libmachine: (embed-certs-706500) Waiting for SSH to be available...
	I0410 22:48:45.116708   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.116963   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.117007   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.117139   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH client type: external
	I0410 22:48:45.117167   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa (-rw-------)
	I0410 22:48:45.117198   58186 main.go:141] libmachine: (embed-certs-706500) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:45.117224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | About to run SSH command:
	I0410 22:48:45.117236   58186 main.go:141] libmachine: (embed-certs-706500) DBG | exit 0
	I0410 22:48:45.240518   58186 main.go:141] libmachine: (embed-certs-706500) DBG | SSH cmd err, output: <nil>: 
	I0410 22:48:45.240843   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetConfigRaw
	I0410 22:48:45.241532   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.243908   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244293   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.244317   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244576   58186 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/config.json ...
	I0410 22:48:45.244775   58186 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:45.244799   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:45.245004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.247248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247639   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.247665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247859   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.248039   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248217   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248375   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.248543   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.248746   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.248766   58186 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:45.357146   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:45.357177   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357428   58186 buildroot.go:166] provisioning hostname "embed-certs-706500"
	I0410 22:48:45.357447   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357624   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.360299   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360700   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.360796   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360838   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.361049   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361183   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361367   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.361537   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.361702   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.361716   58186 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-706500 && echo "embed-certs-706500" | sudo tee /etc/hostname
	I0410 22:48:45.487121   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-706500
	
	I0410 22:48:45.487160   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.490242   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490597   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.490625   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490805   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.491004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491359   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.491576   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.491792   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.491824   58186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:45.606186   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:45.606212   58186 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:45.606246   58186 buildroot.go:174] setting up certificates
	I0410 22:48:45.606257   58186 provision.go:84] configureAuth start
	I0410 22:48:45.606269   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.606594   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.609459   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.609893   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.609932   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.610134   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.612631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.612945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.612979   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.613144   58186 provision.go:143] copyHostCerts
	I0410 22:48:45.613193   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:45.613207   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:45.613262   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:45.613378   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:45.613393   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:45.613427   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:45.613495   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:45.613505   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:45.613529   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:45.613592   58186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.embed-certs-706500 san=[127.0.0.1 192.168.39.10 embed-certs-706500 localhost minikube]
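	provision.go then mints a server certificate whose SANs cover the loopback address, the VM IP, the hostname, localhost and minikube, signed with the ca.pem/ca-key.pem pair listed earlier. The Go sketch below produces a comparable SAN-bearing certificate with crypto/x509; it is self-signed for brevity and purely illustrative, not libmachine's implementation:
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SAN entries are illustrative, modeled on the san=[...] list in the log.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-706500"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			DNSNames:     []string{"embed-certs-706500", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.10")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		// Self-signed for brevity; the real flow signs with the CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}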
	I0410 22:48:45.737049   58186 provision.go:177] copyRemoteCerts
	I0410 22:48:45.737105   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:45.737129   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.739712   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740060   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.740089   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740347   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.740589   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.740763   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.740957   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:45.828677   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:45.854080   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:48:45.878704   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:45.902611   58186 provision.go:87] duration metric: took 296.343353ms to configureAuth
	I0410 22:48:45.902640   58186 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:45.902879   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:48:45.902962   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.905588   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.905950   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.905972   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.906165   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.906360   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906473   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906561   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.906725   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.906887   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.906911   58186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:46.172772   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:46.172807   58186 machine.go:97] duration metric: took 928.014662ms to provisionDockerMachine
	I0410 22:48:46.172823   58186 start.go:293] postStartSetup for "embed-certs-706500" (driver="kvm2")
	I0410 22:48:46.172836   58186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:46.172877   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.173197   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:46.173223   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.176113   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.176495   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176679   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.176896   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.177118   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.177328   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.260470   58186 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:46.265003   58186 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:46.265030   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:46.265088   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:46.265158   58186 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:46.265241   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:46.274931   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:46.300036   58186 start.go:296] duration metric: took 127.199834ms for postStartSetup
	I0410 22:48:46.300082   58186 fix.go:56] duration metric: took 19.322550114s for fixHost
	I0410 22:48:46.300108   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.302945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303252   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.303279   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303479   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.303700   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303861   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303990   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.304140   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:46.304308   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:46.304318   58186 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:46.409294   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789326.385898055
	
	I0410 22:48:46.409317   58186 fix.go:216] guest clock: 1712789326.385898055
	I0410 22:48:46.409327   58186 fix.go:229] Guest: 2024-04-10 22:48:46.385898055 +0000 UTC Remote: 2024-04-10 22:48:46.300087658 +0000 UTC m=+229.287947250 (delta=85.810397ms)
	I0410 22:48:46.409352   58186 fix.go:200] guest clock delta is within tolerance: 85.810397ms
	I0410 22:48:46.409360   58186 start.go:83] releasing machines lock for "embed-certs-706500", held for 19.431860062s
	I0410 22:48:46.409389   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.409752   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:46.412201   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412616   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.412651   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412790   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413361   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413559   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413617   58186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:46.413665   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.413796   58186 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:46.413831   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.416879   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417268   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417428   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.417630   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.417811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.417835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417858   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417938   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.418030   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.418154   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.418284   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.418463   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.529204   58186 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:46.535396   58186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:46.681100   58186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:46.687278   58186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:46.687340   58186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:46.703105   58186 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:46.703128   58186 start.go:494] detecting cgroup driver to use...
	I0410 22:48:46.703191   58186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:46.719207   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:46.733444   58186 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:46.733509   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:46.747369   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:46.762231   58186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:46.874897   58186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:47.023672   58186 docker.go:233] disabling docker service ...
	I0410 22:48:47.023749   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:47.038963   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:47.053827   58186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:46.435268   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Start
	I0410 22:48:46.435498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring networks are active...
	I0410 22:48:46.436266   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network default is active
	I0410 22:48:46.436691   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network mk-default-k8s-diff-port-519831 is active
	I0410 22:48:46.437163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Getting domain xml...
	I0410 22:48:46.437799   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Creating domain...
	I0410 22:48:47.206641   58186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:47.363331   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:47.380657   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:47.402234   58186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:48:47.402306   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.419356   58186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:47.419417   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.435320   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.450812   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.462588   58186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:47.474323   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.494156   58186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.515195   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.526148   58186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:47.536045   58186 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:47.536106   58186 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:47.549556   58186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:48:47.567236   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:47.702628   58186 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:47.848908   58186 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:47.849000   58186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:47.854126   58186 start.go:562] Will wait 60s for crictl version
	I0410 22:48:47.854191   58186 ssh_runner.go:195] Run: which crictl
	I0410 22:48:47.858095   58186 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:47.897714   58186 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:47.897805   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.927597   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.958357   58186 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:48:45.584769   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.085396   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.585857   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.085186   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.585668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.085585   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.585617   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.085227   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.585626   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:50.084900   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.959811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:47.962805   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963246   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:47.963276   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963510   58186 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:47.967753   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:47.981154   58186 kubeadm.go:877] updating cluster {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:47.981258   58186 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:48:47.981298   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:48.018208   58186 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:48:48.018274   58186 ssh_runner.go:195] Run: which lz4
	I0410 22:48:48.023613   58186 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:48:48.029036   58186 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:48.029063   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:48:49.637729   58186 crio.go:462] duration metric: took 1.61414003s to copy over tarball
	I0410 22:48:49.637796   58186 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:52.046454   58186 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.408634496s)
	I0410 22:48:52.046482   58186 crio.go:469] duration metric: took 2.408728343s to extract the tarball
	I0410 22:48:52.046489   58186 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:47.701355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting to get IP...
	I0410 22:48:47.702406   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.702994   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.703067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.702962   59362 retry.go:31] will retry after 292.834608ms: waiting for machine to come up
	I0410 22:48:47.997294   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997757   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997785   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.997701   59362 retry.go:31] will retry after 341.35168ms: waiting for machine to come up
	I0410 22:48:48.340842   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341347   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341379   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.341279   59362 retry.go:31] will retry after 438.041848ms: waiting for machine to come up
	I0410 22:48:48.780565   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781092   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781116   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.781038   59362 retry.go:31] will retry after 557.770882ms: waiting for machine to come up
	I0410 22:48:49.340858   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.341282   59362 retry.go:31] will retry after 637.316206ms: waiting for machine to come up
	I0410 22:48:49.980256   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980737   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980761   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.980696   59362 retry.go:31] will retry after 909.873955ms: waiting for machine to come up
	I0410 22:48:50.891776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892197   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892229   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:50.892147   59362 retry.go:31] will retry after 745.06949ms: waiting for machine to come up
	I0410 22:48:51.638436   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638907   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638933   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:51.638854   59362 retry.go:31] will retry after 1.060037191s: waiting for machine to come up
	I0410 22:48:50.585691   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.085669   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.585308   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.085393   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.585619   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.085643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.585076   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.585027   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.085629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.087135   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:52.139368   58186 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:48:52.139389   58186 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:48:52.139397   58186 kubeadm.go:928] updating node { 192.168.39.10 8443 v1.29.3 crio true true} ...
	I0410 22:48:52.139535   58186 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-706500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:52.139629   58186 ssh_runner.go:195] Run: crio config
	I0410 22:48:52.193347   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:48:52.193375   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:52.193390   58186 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:52.193429   58186 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-706500 NodeName:embed-certs-706500 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:48:52.193606   58186 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-706500"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:52.193686   58186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:48:52.206450   58186 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:52.206507   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:52.218898   58186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0410 22:48:52.239285   58186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:52.257083   58186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0410 22:48:52.275448   58186 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:52.279486   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:52.293308   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:52.428424   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:52.446713   58186 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500 for IP: 192.168.39.10
	I0410 22:48:52.446738   58186 certs.go:194] generating shared ca certs ...
	I0410 22:48:52.446759   58186 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:52.446937   58186 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:52.446980   58186 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:52.446990   58186 certs.go:256] generating profile certs ...
	I0410 22:48:52.447059   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/client.key
	I0410 22:48:52.447124   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key.f3045f1a
	I0410 22:48:52.447156   58186 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key
	I0410 22:48:52.447294   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:52.447328   58186 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:52.447335   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:52.447354   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:52.447374   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:52.447405   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:52.447457   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:52.448166   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:52.481862   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:52.530983   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:52.572191   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:52.614466   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0410 22:48:52.644331   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:48:52.672811   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:52.698376   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:52.723998   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:52.749405   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:52.777529   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:52.803663   58186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:52.822234   58186 ssh_runner.go:195] Run: openssl version
	I0410 22:48:52.830835   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:52.843425   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848384   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848444   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.854869   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:52.867228   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:52.879319   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884241   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884324   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.890349   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:52.902398   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:52.913996   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918757   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918824   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.924669   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:48:52.936581   58186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:52.941242   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:52.947526   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:52.953939   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:52.960447   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:52.966829   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:52.973148   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:48:52.979557   58186 kubeadm.go:391] StartCluster: {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:52.979669   58186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:52.979744   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.018394   58186 cri.go:89] found id: ""
	I0410 22:48:53.018479   58186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:53.030088   58186 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:53.030112   58186 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:53.030118   58186 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:53.030184   58186 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:53.041035   58186 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:53.042312   58186 kubeconfig.go:125] found "embed-certs-706500" server: "https://192.168.39.10:8443"
	I0410 22:48:53.044306   58186 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:53.054911   58186 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.10
	I0410 22:48:53.054948   58186 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:53.054974   58186 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:53.055020   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.093035   58186 cri.go:89] found id: ""
	I0410 22:48:53.093109   58186 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:53.111257   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:53.122098   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:53.122125   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:53.122176   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:53.133513   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:53.133587   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:53.144275   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:53.154921   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:53.155000   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:53.165604   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.175520   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:53.175582   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.186094   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:53.196086   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:53.196156   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:53.206564   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:53.217180   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:53.336883   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.151708   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.367165   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.457694   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.572579   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:54.572693   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.073196   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.572865   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.595374   58186 api_server.go:72] duration metric: took 1.022777759s to wait for apiserver process to appear ...
	I0410 22:48:55.595403   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:48:55.595424   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:52.701137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701574   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701606   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:52.701529   59362 retry.go:31] will retry after 1.792719263s: waiting for machine to come up
	I0410 22:48:54.496380   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496823   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:54.496740   59362 retry.go:31] will retry after 2.321115222s: waiting for machine to come up
	I0410 22:48:56.819654   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820140   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:56.820072   59362 retry.go:31] will retry after 2.57309135s: waiting for machine to come up
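The retry loop above is libmachine polling libvirt for a DHCP lease on the freshly started default-k8s-diff-port-519831 domain. If the same wait ever needs to be checked by hand, virsh can list the leases on the minikube-created network; the network and domain names below are taken from the log, and the lease only appears once the guest brings its interface up.

	# Leases libmachine is waiting for on the mk-* network named in the log
	virsh --connect qemu:///system net-dhcp-leases mk-default-k8s-diff-port-519831
	# MAC address of the domain's interface, which the lease has to match (52:54:00:dc:67:d5 above)
	virsh --connect qemu:///system domiflist default-k8s-diff-port-519831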
	I0410 22:48:55.585506   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.585876   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.085775   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.585260   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.585588   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.085661   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.585663   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:00.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.843447   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:48:58.843487   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:48:58.843504   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:58.962381   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:58.962431   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.095611   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.100754   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.100781   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.595968   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.606936   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.606977   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.096182   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.106346   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:00.106388   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.595923   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.600197   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:49:00.609220   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:00.609246   58186 api_server.go:131] duration metric: took 5.013835577s to wait for apiserver health ...
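The 403 and 500 responses above are the apiserver coming up: anonymous access to /healthz is briefly forbidden, then individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, bootstrap-controller) flip from [-] to [+] until the endpoint returns 200 "ok". The same probe can be repeated by hand; the address is taken from the log, and -k skips certificate verification for this quick manual check.

	# Aggregate health, as minikube polls it
	curl -k https://192.168.39.10:8443/healthz
	# Per-check breakdown, matching the [+]/[-] lists in the log
	curl -k "https://192.168.39.10:8443/healthz?verbose"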
	I0410 22:49:00.609256   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:49:00.609263   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:00.611220   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:00.612765   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:00.625567   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
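Minikube has just written a 496-byte bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist. The file itself is not printed in the log; the heredoc below is only an illustrative conflist of the same general shape (bridge plugin plus host-local IPAM), with values that are assumptions rather than minikube's exact settings.

	# Inspect the file the test just copied in
	sudo cat /etc/cni/net.d/1-k8s.conflist
	# Illustrative shape of a bridge conflist (values are assumptions, not minikube's)
	cat <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF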
	I0410 22:49:00.648581   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:00.657652   58186 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:00.657688   58186 system_pods.go:61] "coredns-76f75df574-j4kj8" [1986e6b6-e6c7-4212-bdd5-10360a0b897c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:00.657696   58186 system_pods.go:61] "etcd-embed-certs-706500" [acbf9245-d4f8-4fa6-88a7-4f891f9f8403] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:00.657704   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [b9c79d1d-f571-4ed8-a68f-512e8a2a1705] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:00.657709   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [d229b85d-9a8d-4cd0-ac48-a6aea3769581] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:00.657715   58186 system_pods.go:61] "kube-proxy-8kzff" [ce35a33f-1697-44a7-ad64-83895236bc6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:00.657720   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [72c68a6c-beba-48a5-937b-51c40aab0386] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:00.657726   58186 system_pods.go:61] "metrics-server-57f55c9bc5-4r9pl" [40a91fc1-9e0a-4bcc-a2e9-65e9f2d2b960] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:00.657733   58186 system_pods.go:61] "storage-provisioner" [10f7637e-e6e0-4f04-b1eb-ac3bd205064f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:00.657742   58186 system_pods.go:74] duration metric: took 9.141859ms to wait for pod list to return data ...
	I0410 22:49:00.657752   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:00.662255   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:00.662300   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:00.662315   58186 node_conditions.go:105] duration metric: took 4.553643ms to run NodePressure ...
	I0410 22:49:00.662338   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:00.957923   58186 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962553   58186 kubeadm.go:733] kubelet initialised
	I0410 22:49:00.962575   58186 kubeadm.go:734] duration metric: took 4.616848ms waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962585   58186 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:00.968387   58186 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
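pod_ready.go now waits up to 4m0s for each system-critical pod to report the Ready condition; the "Ready":"False" lines that follow are that wait in progress. Assuming kubectl is pointed at this embed-certs-706500 cluster, the equivalent manual checks would be:

	# The same kube-system pods the test enumerates above
	kubectl -n kube-system get pods -o wide
	# Block until coredns reports Ready, mirroring the pod_ready wait (pod name taken from the log)
	kubectl -n kube-system wait --for=condition=Ready pod/coredns-76f75df574-j4kj8 --timeout=4m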
	I0410 22:48:59.395416   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395864   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395893   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:59.395819   59362 retry.go:31] will retry after 2.378137008s: waiting for machine to come up
	I0410 22:49:01.776037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776587   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776641   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:49:01.776526   59362 retry.go:31] will retry after 4.360839049s: waiting for machine to come up
	I0410 22:49:00.585234   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.084884   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.585066   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.085697   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.585573   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.085552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.585521   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.584802   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:05.085266   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.975009   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:04.976854   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:06.141509   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142008   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Found IP for machine: 192.168.72.170
	I0410 22:49:06.142037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has current primary IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142047   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserving static IP address...
	I0410 22:49:06.142422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserved static IP address: 192.168.72.170
	I0410 22:49:06.142451   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for SSH to be available...
	I0410 22:49:06.142476   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.142499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | skip adding static IP to network mk-default-k8s-diff-port-519831 - found existing host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"}
	I0410 22:49:06.142518   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Getting to WaitForSSH function...
	I0410 22:49:06.144878   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145206   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.145238   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145326   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH client type: external
	I0410 22:49:06.145365   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa (-rw-------)
	I0410 22:49:06.145401   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:06.145421   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | About to run SSH command:
	I0410 22:49:06.145438   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | exit 0
	I0410 22:49:06.272546   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:06.272919   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetConfigRaw
	I0410 22:49:06.273605   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.276234   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276610   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.276644   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276851   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:49:06.277100   58701 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:06.277127   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:06.277400   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.279729   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.280146   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280295   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.280480   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280658   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280794   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.280939   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.281121   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.281138   58701 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:06.385219   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:06.385254   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385498   58701 buildroot.go:166] provisioning hostname "default-k8s-diff-port-519831"
	I0410 22:49:06.385527   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385716   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.388422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.388922   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.388963   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.389072   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.389292   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389462   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389600   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.389751   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.389924   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.389938   58701 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-519831 && echo "default-k8s-diff-port-519831" | sudo tee /etc/hostname
	I0410 22:49:06.507221   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-519831
	
	I0410 22:49:06.507252   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.509837   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510179   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.510225   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510385   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.510561   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510880   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.511040   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.511236   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.511262   58701 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-519831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-519831/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-519831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:06.626097   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:06.626129   58701 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:06.626153   58701 buildroot.go:174] setting up certificates
	I0410 22:49:06.626163   58701 provision.go:84] configureAuth start
	I0410 22:49:06.626173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.626499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.629067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629412   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.629450   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629559   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.632132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632517   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.632548   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632674   58701 provision.go:143] copyHostCerts
	I0410 22:49:06.632734   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:06.632755   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:06.632822   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:06.633021   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:06.633037   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:06.633078   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:06.633179   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:06.633191   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:06.633223   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:06.633295   58701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-519831 san=[127.0.0.1 192.168.72.170 default-k8s-diff-port-519831 localhost minikube]
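provision.go has generated a server certificate whose SANs cover 127.0.0.1, the machine IP and the hostnames listed above. If that SAN list ever needs to be confirmed, openssl can print it from the generated server.pem (path taken from the log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'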
	I0410 22:49:06.835016   58701 provision.go:177] copyRemoteCerts
	I0410 22:49:06.835077   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:06.835104   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.837769   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838124   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.838152   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838327   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.838519   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.838669   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.838808   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:06.921929   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:06.947855   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0410 22:49:06.972865   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:06.999630   58701 provision.go:87] duration metric: took 373.45654ms to configureAuth
	I0410 22:49:06.999658   58701 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:06.999872   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:49:06.999942   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.003015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003418   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.003452   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.003793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.003946   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.004062   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.004208   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.004425   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.004448   58701 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:07.273568   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:07.273601   58701 machine.go:97] duration metric: took 996.483382ms to provisionDockerMachine
	I0410 22:49:07.273618   58701 start.go:293] postStartSetup for "default-k8s-diff-port-519831" (driver="kvm2")
	I0410 22:49:07.273634   58701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:07.273660   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.274009   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:07.274040   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.276736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.277155   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.277537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.277740   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.277891   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.361056   58701 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:07.365729   58701 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:07.365759   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:07.365834   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:07.365935   58701 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:07.366064   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:07.376754   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:07.509384   57270 start.go:364] duration metric: took 56.035567079s to acquireMachinesLock for "no-preload-646133"
	I0410 22:49:07.509424   57270 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:49:07.509432   57270 fix.go:54] fixHost starting: 
	I0410 22:49:07.509837   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:07.509872   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:07.526882   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0410 22:49:07.527337   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:07.527780   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:49:07.527801   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:07.528077   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:07.528238   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:07.528366   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:49:07.529732   57270 fix.go:112] recreateIfNeeded on no-preload-646133: state=Stopped err=<nil>
	I0410 22:49:07.529755   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	W0410 22:49:07.529878   57270 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:49:07.531875   57270 out.go:177] * Restarting existing kvm2 VM for "no-preload-646133" ...
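fix.go has found the existing no-preload-646133 machine in state=Stopped, so the kvm2 driver will restart the libvirt domain rather than recreate it. Doing the same check and restart by hand (domain name from the log, assuming the system libvirt connection minikube uses):

	virsh --connect qemu:///system domstate no-preload-646133   # expected to print "shut off" at this point
	virsh --connect qemu:///system start no-preload-646133      # what the kvm2 driver does on restart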
	I0410 22:49:07.402691   58701 start.go:296] duration metric: took 129.059293ms for postStartSetup
	I0410 22:49:07.402731   58701 fix.go:56] duration metric: took 20.99318672s for fixHost
	I0410 22:49:07.402751   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.405634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.405955   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.405996   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.406161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.406378   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406647   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.406826   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.407062   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.407079   58701 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:49:07.509210   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789347.471050157
	
	I0410 22:49:07.509233   58701 fix.go:216] guest clock: 1712789347.471050157
	I0410 22:49:07.509241   58701 fix.go:229] Guest: 2024-04-10 22:49:07.471050157 +0000 UTC Remote: 2024-04-10 22:49:07.402735415 +0000 UTC m=+140.054227768 (delta=68.314742ms)
	I0410 22:49:07.509287   58701 fix.go:200] guest clock delta is within tolerance: 68.314742ms
	I0410 22:49:07.509297   58701 start.go:83] releasing machines lock for "default-k8s-diff-port-519831", held for 21.099785205s
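The guest-clock check above runs "date +%s.%N" on the VM (logged with Go's %!s(MISSING) placeholders) and compares it against the host wall clock; the 68ms delta is within tolerance, so no clock sync is forced. The same comparison by hand, using the SSH key, user and IP shown earlier in the log:

	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa \
	  docker@192.168.72.170 'date +%s.%N'; date +%s.%N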
	I0410 22:49:07.509328   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.509613   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:07.512255   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.512667   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512827   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513364   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513531   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513610   58701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:07.513649   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.513750   58701 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:07.513771   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.516338   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516685   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.516802   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516951   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517142   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.517161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.517310   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517470   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517602   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517604   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.517765   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.594218   58701 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:07.633783   58701 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:07.790430   58701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:07.797279   58701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:07.797358   58701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:07.815457   58701 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:07.815488   58701 start.go:494] detecting cgroup driver to use...
	I0410 22:49:07.815561   58701 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:07.833038   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:07.848577   58701 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:07.848648   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:07.863609   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:07.878299   58701 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:07.999388   58701 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:08.155534   58701 docker.go:233] disabling docker service ...
	I0410 22:49:08.155613   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:08.175545   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:08.195923   58701 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:08.340282   58701 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:08.485647   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:08.500245   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:08.520493   58701 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:08.520582   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.535455   58701 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:08.535521   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.547058   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.559638   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.571374   58701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:08.583796   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.598091   58701 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.622634   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
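	The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place to set the pause image, the cgroupfs cgroup manager, conmon_cgroup and default_sysctls. As an illustration of that kind of line rewrite only (minikube itself shells out to sed over SSH, as logged), one substitution expressed in Go might look like the following; the file path in main is a placeholder:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setPauseImage rewrites any existing pause_image line in a CRI-O drop-in file.
	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = "%s"`, image)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		// Placeholder path; on a minikube node this is /etc/crio/crio.conf.d/02-crio.conf.
		if err := setPauseImage("02-crio.conf", "registry.k8s.io/pause:3.9"); err != nil {
			fmt.Println("error:", err)
		}
	}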
	I0410 22:49:08.633858   58701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:08.645114   58701 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:08.645167   58701 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:08.660204   58701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:08.671345   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:08.804523   58701 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:08.953644   58701 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:08.953717   58701 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:08.958661   58701 start.go:562] Will wait 60s for crictl version
	I0410 22:49:08.958715   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:49:08.962938   58701 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:09.006335   58701 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
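	The crictl version block above is plain "Key:  value" text. For illustration only (this is not how minikube consumes it), a small helper that folds those lines into a map could be:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseCrictlVersion turns "Key:  value" lines, as printed above, into a map.
	func parseCrictlVersion(out string) map[string]string {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			k, v, ok := strings.Cut(sc.Text(), ":")
			if !ok {
				continue
			}
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
		return fields
	}

	func main() {
		sample := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
		v := parseCrictlVersion(sample)
		fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // cri-o 1.29.1
	}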
	I0410 22:49:09.006425   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.037315   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.069366   58701 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:49:07.533174   57270 main.go:141] libmachine: (no-preload-646133) Calling .Start
	I0410 22:49:07.533352   57270 main.go:141] libmachine: (no-preload-646133) Ensuring networks are active...
	I0410 22:49:07.534117   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network default is active
	I0410 22:49:07.534413   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network mk-no-preload-646133 is active
	I0410 22:49:07.534851   57270 main.go:141] libmachine: (no-preload-646133) Getting domain xml...
	I0410 22:49:07.535553   57270 main.go:141] libmachine: (no-preload-646133) Creating domain...
	I0410 22:49:08.844990   57270 main.go:141] libmachine: (no-preload-646133) Waiting to get IP...
	I0410 22:49:08.845908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:08.846363   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:08.846459   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:08.846332   59513 retry.go:31] will retry after 241.150391ms: waiting for machine to come up
	I0410 22:49:09.088961   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.089455   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.089489   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.089417   59513 retry.go:31] will retry after 349.96397ms: waiting for machine to come up
	I0410 22:49:09.441226   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.441799   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.441828   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.441754   59513 retry.go:31] will retry after 444.576999ms: waiting for machine to come up
	I0410 22:49:05.585408   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.085250   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.585503   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.085422   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.584909   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.084863   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.585859   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.085175   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.585660   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:10.085221   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
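	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are a ~500ms poll waiting for the apiserver process to appear on the node. A generic sketch of that wait pattern (placeholder names, not minikube's api_server.go) is:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls `pgrep -xnf pattern` until it succeeds or ctx expires.
	func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // process found
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver is running")
	}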
	I0410 22:49:07.475385   58186 pod_ready.go:92] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:07.475414   58186 pod_ready.go:81] duration metric: took 6.506993581s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:07.475424   58186 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:09.486133   58186 pod_ready.go:102] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:11.483972   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.483994   58186 pod_ready.go:81] duration metric: took 4.008564427s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.484005   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490340   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.490380   58186 pod_ready.go:81] duration metric: took 6.362017ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490399   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497078   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.497110   58186 pod_ready.go:81] duration metric: took 6.701645ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497124   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504091   58186 pod_ready.go:92] pod "kube-proxy-8kzff" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.504118   58186 pod_ready.go:81] duration metric: took 6.985136ms for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504132   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510619   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.510656   58186 pod_ready.go:81] duration metric: took 6.513031ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510674   58186 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
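	The pod_ready entries above wait up to 4m0s for each control-plane pod in kube-system to report the Ready condition. A rough client-go sketch of that check is below; the kubeconfig path and pod name are placeholders, k8s.io/client-go is assumed as a dependency, and the shape is illustrative rather than minikube's pod_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachine ry/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-706500", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}

	(Note: the metav1 import path is k8s.io/apimachinery/pkg/apis/meta/v1, written without the stray space.)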
	I0410 22:49:09.070592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:09.073850   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:09.074190   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074388   58701 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:09.079170   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:09.093764   58701 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:09.093973   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:49:09.094040   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:09.140874   58701 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:49:09.140951   58701 ssh_runner.go:195] Run: which lz4
	I0410 22:49:09.146775   58701 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:49:09.152876   58701 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:49:09.152917   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:49:10.827934   58701 crio.go:462] duration metric: took 1.681191787s to copy over tarball
	I0410 22:49:10.828019   58701 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:49:09.888688   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.892576   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.892607   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.889179   59513 retry.go:31] will retry after 560.585608ms: waiting for machine to come up
	I0410 22:49:10.451001   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:10.451630   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:10.451663   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:10.451590   59513 retry.go:31] will retry after 601.519186ms: waiting for machine to come up
	I0410 22:49:11.054324   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.054664   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.054693   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.054653   59513 retry.go:31] will retry after 750.183717ms: waiting for machine to come up
	I0410 22:49:11.805908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.806303   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.806331   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.806254   59513 retry.go:31] will retry after 883.805148ms: waiting for machine to come up
	I0410 22:49:12.691316   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:12.691861   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:12.691893   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:12.691804   59513 retry.go:31] will retry after 1.39605629s: waiting for machine to come up
	I0410 22:49:14.090350   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:14.090795   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:14.090821   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:14.090753   59513 retry.go:31] will retry after 1.388324423s: waiting for machine to come up
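	The libmachine DBG lines above show a retry loop whose wait grows each attempt (241ms, 349ms, 444ms, ...) while the VM waits for a DHCP lease. A generic backoff sketch of that pattern follows; it is not the retry.go implementation, and the growth factor and jitter are illustrative choices:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds, sleeping an increasing,
	// slightly jittered interval between attempts, up to maxAttempts.
	func retryWithBackoff(fn func() error, maxAttempts int, base time.Duration) error {
		wait := base
		var err error
		for i := 0; i < maxAttempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(wait) / 2))
			time.Sleep(wait + jitter)
			wait = wait * 3 / 2 // grow ~1.5x each round, like the intervals above
		}
		return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		attempt := 0
		err := retryWithBackoff(func() error {
			attempt++
			if attempt < 4 {
				return errors.New("machine has no IP yet")
			}
			return nil
		}, 10, 250*time.Millisecond)
		fmt.Println(err, "attempts:", attempt)
	}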
	I0410 22:49:10.585333   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.585062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.085191   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.585644   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.085615   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.585355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.085270   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.584868   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:15.085639   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.521844   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:16.041569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:13.328492   58701 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.500439721s)
	I0410 22:49:13.328534   58701 crio.go:469] duration metric: took 2.500564923s to extract the tarball
	I0410 22:49:13.328545   58701 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:49:13.367568   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:13.415759   58701 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:49:13.415780   58701 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:49:13.415788   58701 kubeadm.go:928] updating node { 192.168.72.170 8444 v1.29.3 crio true true} ...
	I0410 22:49:13.415899   58701 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-519831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:13.415982   58701 ssh_runner.go:195] Run: crio config
	I0410 22:49:13.473019   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:13.473046   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:13.473063   58701 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:13.473100   58701 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.170 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-519831 NodeName:default-k8s-diff-port-519831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:13.473261   58701 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-519831"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:13.473325   58701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:49:13.487302   58701 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:13.487368   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:13.498496   58701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0410 22:49:13.518312   58701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:49:13.537972   58701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
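	The kubeadm.yaml copied above is generated from the kubeadm options logged earlier (advertise address, bind port, node name, pod subnet, and so on) and matches the full config dump shown before it. A toy text/template sketch that fills in a few of those fields with placeholder values is given below; the real generator in minikube covers many more options than this fragment:

	package main

	import (
		"os"
		"text/template"
	)

	// Only a fragment of the InitConfiguration is templated here, for illustration.
	const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	  taints: []
	`

	type initConfig struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
		cfg := initConfig{
			AdvertiseAddress: "192.168.72.170",
			BindPort:         8444,
			NodeName:         "default-k8s-diff-port-519831",
		}
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}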
	I0410 22:49:13.558714   58701 ssh_runner.go:195] Run: grep 192.168.72.170	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:13.562886   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:13.575957   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:13.706316   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:13.725898   58701 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831 for IP: 192.168.72.170
	I0410 22:49:13.725924   58701 certs.go:194] generating shared ca certs ...
	I0410 22:49:13.725944   58701 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:13.726119   58701 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:13.726173   58701 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:13.726185   58701 certs.go:256] generating profile certs ...
	I0410 22:49:13.726297   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/client.key
	I0410 22:49:13.726398   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key.ff579077
	I0410 22:49:13.726454   58701 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key
	I0410 22:49:13.726606   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:13.726644   58701 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:13.726656   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:13.726685   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:13.726725   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:13.726756   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:13.726811   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:13.727747   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:13.780060   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:13.818446   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:13.865986   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:13.897578   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0410 22:49:13.937123   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:49:13.970558   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:13.997678   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:49:14.025173   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:14.051190   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:14.079109   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:14.107547   58701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:14.128029   58701 ssh_runner.go:195] Run: openssl version
	I0410 22:49:14.134686   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:14.148733   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154057   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154114   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.160626   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:14.174406   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:14.187513   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193279   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193344   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.199518   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:14.213538   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:14.225618   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230610   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230666   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.236756   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:14.250041   58701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:14.255320   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:14.262821   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:14.268854   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:14.275152   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:14.281598   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:14.287895   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
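	Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks whether the given certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. An equivalent check written with Go's crypto/x509, using a placeholder certificate path, might be:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Placeholder path; the log checks apiserver, etcd and front-proxy client certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}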
	I0410 22:49:14.294125   58701 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:14.294246   58701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:14.294301   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.332192   58701 cri.go:89] found id: ""
	I0410 22:49:14.332268   58701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:14.343174   58701 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:14.343198   58701 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:14.343205   58701 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:14.343261   58701 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:14.355648   58701 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:14.357310   58701 kubeconfig.go:125] found "default-k8s-diff-port-519831" server: "https://192.168.72.170:8444"
	I0410 22:49:14.360713   58701 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:14.371972   58701 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.170
	I0410 22:49:14.372011   58701 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:14.372025   58701 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:14.372083   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.410517   58701 cri.go:89] found id: ""
	I0410 22:49:14.410594   58701 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:14.428686   58701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:14.443256   58701 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:14.443281   58701 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:14.443353   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0410 22:49:14.455086   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:14.455156   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:14.466151   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0410 22:49:14.476799   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:14.476852   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:14.487588   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.498476   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:14.498534   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.509248   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0410 22:49:14.520223   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:14.520287   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:14.531388   58701 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:14.542775   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:14.673733   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.773338   58701 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099570437s)
	I0410 22:49:15.773385   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.985355   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.052996   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.126251   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:16.126362   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.626615   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.127289   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.166269   58701 api_server.go:72] duration metric: took 1.040013076s to wait for apiserver process to appear ...
	I0410 22:49:17.166315   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:17.166339   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:17.166964   58701 api_server.go:269] stopped: https://192.168.72.170:8444/healthz: Get "https://192.168.72.170:8444/healthz": dial tcp 192.168.72.170:8444: connect: connection refused
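	The healthz probes above hit https://192.168.72.170:8444/healthz anonymously and treat "connection refused", 403 and 500 responses as "not ready yet", retrying until a plain 200 "ok" arrives. An illustrative polling sketch is shown below; the URL is a placeholder taken from the log, and TLS verification is skipped here because the probe is anonymous against a self-signed cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.72.170:8444/healthz" // placeholder endpoint mirroring the log
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}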
	I0410 22:49:15.480947   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:15.481358   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:15.481386   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:15.481309   59513 retry.go:31] will retry after 2.276682979s: waiting for machine to come up
	I0410 22:49:17.759404   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:17.759931   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:17.759975   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:17.759887   59513 retry.go:31] will retry after 2.254373826s: waiting for machine to come up
	I0410 22:49:15.585476   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.085404   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.585123   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.085713   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.584877   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.085601   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.585222   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.084891   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.585215   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:20.085668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.519156   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:20.520053   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:17.667248   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.709507   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:20.709538   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:20.709554   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.740392   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:20.740483   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.166658   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.174343   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.174378   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.667345   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.685078   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.685112   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:22.166644   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:22.171611   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:49:22.178452   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:22.178484   58701 api_server.go:131] duration metric: took 5.012161431s to wait for apiserver health ...
	I0410 22:49:22.178493   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:22.178499   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:22.180370   58701 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:22.181768   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:22.197462   58701 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:22.218348   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:22.236800   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:22.236830   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:22.236837   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:22.236843   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:22.236849   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:22.236861   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:22.236866   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:22.236871   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:22.236876   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:22.236884   58701 system_pods.go:74] duration metric: took 18.510987ms to wait for pod list to return data ...
	I0410 22:49:22.236893   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:22.242143   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:22.242167   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:22.242177   58701 node_conditions.go:105] duration metric: took 5.279415ms to run NodePressure ...
	I0410 22:49:22.242192   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:22.532741   58701 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537418   58701 kubeadm.go:733] kubelet initialised
	I0410 22:49:22.537444   58701 kubeadm.go:734] duration metric: took 4.675489ms waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537453   58701 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:22.543364   58701 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.549161   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549186   58701 pod_ready.go:81] duration metric: took 5.796619ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.549196   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549207   58701 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.554131   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554156   58701 pod_ready.go:81] duration metric: took 4.941026ms for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.554165   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554172   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.558783   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558812   58701 pod_ready.go:81] duration metric: took 4.633262ms for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.558822   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558828   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.622314   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622344   58701 pod_ready.go:81] duration metric: took 63.505681ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.622356   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622370   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.022239   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022266   58701 pod_ready.go:81] duration metric: took 399.888837ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.022275   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022286   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.422213   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422245   58701 pod_ready.go:81] duration metric: took 399.950443ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.422257   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422270   58701 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.823832   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823858   58701 pod_ready.go:81] duration metric: took 401.581123ms for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.823868   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823875   58701 pod_ready.go:38] duration metric: took 1.286413141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:23.823889   58701 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:49:23.840663   58701 ops.go:34] apiserver oom_adj: -16
	I0410 22:49:23.840691   58701 kubeadm.go:591] duration metric: took 9.497479077s to restartPrimaryControlPlane
	I0410 22:49:23.840702   58701 kubeadm.go:393] duration metric: took 9.546582608s to StartCluster
	I0410 22:49:23.840718   58701 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.840795   58701 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:49:23.843350   58701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.843613   58701 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:49:23.845385   58701 out.go:177] * Verifying Kubernetes components...
	I0410 22:49:23.843685   58701 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:49:23.846686   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:23.845421   58701 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846834   58701 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-519831"
	I0410 22:49:23.843826   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	W0410 22:49:23.846852   58701 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:49:23.846901   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.845429   58701 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846969   58701 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-519831"
	I0410 22:49:23.845433   58701 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.847069   58701 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.847088   58701 addons.go:243] addon metrics-server should already be in state true
	I0410 22:49:23.847122   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.847349   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847358   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847381   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847384   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847495   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847532   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.863090   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I0410 22:49:23.863240   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0410 22:49:23.863685   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.863793   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.864315   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864333   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864356   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864371   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864741   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864749   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864949   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.865210   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.865258   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.867599   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0410 22:49:23.868035   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.868627   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.868652   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.868739   58701 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.868757   58701 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:49:23.868785   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.869023   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.869094   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869136   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.869562   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869630   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.881589   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0410 22:49:23.881997   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.882429   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.882442   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.882719   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.882914   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.884708   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.886865   58701 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:49:23.886946   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0410 22:49:23.888493   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:49:23.888511   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:49:23.888532   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.888850   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.889129   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0410 22:49:23.889513   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.889536   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.889601   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.890020   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.890265   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.890285   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.890308   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.890667   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.891458   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.891496   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.892090   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.892232   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.894143   58701 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:20.015689   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:20.016192   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:20.016230   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:20.016163   59513 retry.go:31] will retry after 2.611766259s: waiting for machine to come up
	I0410 22:49:22.629270   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:22.629704   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:22.629731   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:22.629644   59513 retry.go:31] will retry after 3.270808972s: waiting for machine to come up
	I0410 22:49:23.892695   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.892720   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.895489   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.895599   58701 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:23.895609   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:49:23.895623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.896367   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.896558   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.896754   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.898964   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899320   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.899355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899535   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.899715   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.899855   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.899999   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.910046   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0410 22:49:23.910471   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.911056   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.911077   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.911445   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.911653   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.913330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.913603   58701 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:23.913619   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:49:23.913637   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.916303   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916759   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.916820   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916923   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.917137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.917377   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.917517   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:24.067636   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:24.087396   58701 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:24.204429   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:49:24.204457   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:49:24.213319   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:24.224083   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:24.234156   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:49:24.234182   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:49:24.273950   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.273980   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:49:24.295822   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.580460   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580835   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.580853   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.580864   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.581102   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.581126   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.589648   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.589714   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.589981   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.590040   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.590062   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339438   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.043578779s)
	I0410 22:49:25.339489   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339451   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115333809s)
	I0410 22:49:25.339560   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339593   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339872   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339897   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339911   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339924   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339944   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.339956   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339984   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340004   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.340015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.340149   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.340185   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340203   58701 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-519831"
	I0410 22:49:25.341481   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.341497   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.344575   58701 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0410 22:49:20.585629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.084898   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.585346   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.085672   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.585768   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.085613   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.585507   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.085104   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.585745   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:25.084858   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.017917   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.018591   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:27.019206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.341622   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.345974   58701 addons.go:505] duration metric: took 1.502302613s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0410 22:49:26.094458   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:25.904062   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904580   57270 main.go:141] libmachine: (no-preload-646133) Found IP for machine: 192.168.50.17
	I0410 22:49:25.904608   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has current primary IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904622   57270 main.go:141] libmachine: (no-preload-646133) Reserving static IP address...
	I0410 22:49:25.905076   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.905117   57270 main.go:141] libmachine: (no-preload-646133) DBG | skip adding static IP to network mk-no-preload-646133 - found existing host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"}
	I0410 22:49:25.905134   57270 main.go:141] libmachine: (no-preload-646133) Reserved static IP address: 192.168.50.17
	I0410 22:49:25.905151   57270 main.go:141] libmachine: (no-preload-646133) Waiting for SSH to be available...
	I0410 22:49:25.905170   57270 main.go:141] libmachine: (no-preload-646133) DBG | Getting to WaitForSSH function...
	I0410 22:49:25.907397   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907773   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.907796   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907937   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH client type: external
	I0410 22:49:25.907960   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa (-rw-------)
	I0410 22:49:25.907979   57270 main.go:141] libmachine: (no-preload-646133) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:25.907989   57270 main.go:141] libmachine: (no-preload-646133) DBG | About to run SSH command:
	I0410 22:49:25.907997   57270 main.go:141] libmachine: (no-preload-646133) DBG | exit 0
	I0410 22:49:26.032683   57270 main.go:141] libmachine: (no-preload-646133) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:26.033065   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetConfigRaw
	I0410 22:49:26.033761   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.036545   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.036951   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.036982   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.037187   57270 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/config.json ...
	I0410 22:49:26.037403   57270 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:26.037424   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:26.037655   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.039750   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040081   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.040102   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040285   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.040486   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040657   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040818   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.040972   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.041180   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.041197   57270 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:26.149298   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:26.149335   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149618   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:49:26.149647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149849   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.152432   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152799   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.152829   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.153233   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153406   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153571   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.153774   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.153992   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.154010   57270 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-646133 && echo "no-preload-646133" | sudo tee /etc/hostname
	I0410 22:49:26.283760   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-646133
	
	I0410 22:49:26.283794   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.286605   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.286925   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.286955   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.287097   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.287277   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287425   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287551   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.287725   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.287944   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.287969   57270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-646133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-646133/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-646133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:26.402869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:26.402905   57270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:26.402945   57270 buildroot.go:174] setting up certificates
	I0410 22:49:26.402956   57270 provision.go:84] configureAuth start
	I0410 22:49:26.402973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.403234   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.405718   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406079   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.406119   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.408549   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.408882   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.408917   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.409034   57270 provision.go:143] copyHostCerts
	I0410 22:49:26.409106   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:26.409124   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:26.409177   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:26.409310   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:26.409320   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:26.409341   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:26.409405   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:26.409412   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:26.409430   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:26.409476   57270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.no-preload-646133 san=[127.0.0.1 192.168.50.17 localhost minikube no-preload-646133]
	I0410 22:49:26.567556   57270 provision.go:177] copyRemoteCerts
	I0410 22:49:26.567611   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:26.567647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.570205   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570589   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.570614   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570805   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.571034   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.571172   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.571294   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:26.655943   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:26.681691   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:49:26.706573   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:26.733054   57270 provision.go:87] duration metric: took 330.073783ms to configureAuth
	I0410 22:49:26.733088   57270 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:26.733276   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:49:26.733347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.735910   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736264   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.736295   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736474   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.736648   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736798   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736925   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.737055   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.737225   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.737241   57270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:27.008174   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:27.008202   57270 machine.go:97] duration metric: took 970.785508ms to provisionDockerMachine
	I0410 22:49:27.008216   57270 start.go:293] postStartSetup for "no-preload-646133" (driver="kvm2")
	I0410 22:49:27.008236   57270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:27.008263   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.008554   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:27.008580   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.011150   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011561   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.011604   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011900   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.012090   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.012274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.012432   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.105247   57270 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:27.109842   57270 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:27.109868   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:27.109927   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:27.109993   57270 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:27.110080   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:27.121451   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:27.151797   57270 start.go:296] duration metric: took 143.569287ms for postStartSetup
	I0410 22:49:27.151836   57270 fix.go:56] duration metric: took 19.642403615s for fixHost
	I0410 22:49:27.151865   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.154454   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154869   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.154903   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154987   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.155193   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155512   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.155660   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:27.155862   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:27.155875   57270 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0410 22:49:27.265609   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789367.209761579
	
	I0410 22:49:27.265652   57270 fix.go:216] guest clock: 1712789367.209761579
	I0410 22:49:27.265662   57270 fix.go:229] Guest: 2024-04-10 22:49:27.209761579 +0000 UTC Remote: 2024-04-10 22:49:27.151840464 +0000 UTC m=+377.371052419 (delta=57.921115ms)
	I0410 22:49:27.265687   57270 fix.go:200] guest clock delta is within tolerance: 57.921115ms
	I0410 22:49:27.265697   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 19.756293566s
	I0410 22:49:27.265724   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.265960   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:27.268735   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269184   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.269216   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269380   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270014   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270233   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270331   57270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:27.270376   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.270645   57270 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:27.270669   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.273542   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273846   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273986   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274019   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274140   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274230   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274259   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274318   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274400   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274531   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274536   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274688   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274723   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.274806   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.359922   57270 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:27.400885   57270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:27.555260   57270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:27.561275   57270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:27.561333   57270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:27.578478   57270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:27.578502   57270 start.go:494] detecting cgroup driver to use...
	I0410 22:49:27.578567   57270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:27.598020   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:27.613068   57270 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:27.613140   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:27.629253   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:27.644130   57270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:27.791801   57270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:27.952366   57270 docker.go:233] disabling docker service ...
	I0410 22:49:27.952477   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:27.968629   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:27.982330   57270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:28.117396   57270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:28.240808   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:28.257299   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:28.280918   57270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:28.280991   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.296415   57270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:28.296480   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.308602   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.319535   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.329812   57270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:28.341466   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.354706   57270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.374405   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.385094   57270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:28.394412   57270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:28.394466   57270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:28.407654   57270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:28.418381   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:28.525783   57270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:28.678643   57270 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:28.678706   57270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:28.683681   57270 start.go:562] Will wait 60s for crictl version
	I0410 22:49:28.683737   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:28.687703   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:28.725311   57270 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:49:28.725414   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.755393   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.788963   57270 out.go:177] * Preparing Kubernetes v1.30.0-rc.1 on CRI-O 1.29.1 ...
	I0410 22:49:28.790274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:28.793091   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793418   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:28.793452   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793659   57270 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:28.798916   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:28.814575   57270 kubeadm.go:877] updating cluster {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:28.814689   57270 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 22:49:28.814717   57270 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:28.852604   57270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.1". assuming images are not preloaded.
	I0410 22:49:28.852627   57270 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.1 registry.k8s.io/kube-controller-manager:v1.30.0-rc.1 registry.k8s.io/kube-scheduler:v1.30.0-rc.1 registry.k8s.io/kube-proxy:v1.30.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:49:28.852698   57270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.852707   57270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.852733   57270 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.852756   57270 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0410 22:49:28.852803   57270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.852870   57270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.852890   57270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.852917   57270 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854348   57270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.854354   57270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.854378   57270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.854419   57270 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854421   57270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.854355   57270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.854353   57270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.854740   57270 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0410 22:49:29.066608   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0410 22:49:29.072486   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.073347   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.075270   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.082649   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.085737   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.093699   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.290780   57270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" does not exist at hash "ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b" in container runtime
	I0410 22:49:29.290810   57270 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0410 22:49:29.290839   57270 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.290837   57270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.290849   57270 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0410 22:49:29.290871   57270 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290902   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304346   57270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.1" does not exist at hash "69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061" in container runtime
	I0410 22:49:29.304409   57270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.304459   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304510   57270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" does not exist at hash "bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895" in container runtime
	I0410 22:49:29.304599   57270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.304635   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304563   57270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" does not exist at hash "577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090" in container runtime
	I0410 22:49:29.304689   57270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.304738   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.311219   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.311264   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.311311   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.324663   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.324770   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.324855   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.442426   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0410 22:49:29.442541   57270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.458416   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0410 22:49:29.458526   57270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:29.468890   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.468998   57270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.481365   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.481482   57270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.498862   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498899   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0410 22:49:29.498913   57270 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498927   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498951   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1 (exists)
	I0410 22:49:29.498957   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498964   57270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498982   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1 (exists)
	I0410 22:49:29.499012   57270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498926   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0410 22:49:29.507249   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1 (exists)
	I0410 22:49:29.507282   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1 (exists)
	I0410 22:49:29.751612   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:25.585095   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.085119   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.585846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.084920   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.585251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.084926   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.585643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.084937   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.585666   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:30.085088   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.518476   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:31.518837   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:28.592323   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.098027   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.591789   58701 node_ready.go:49] node "default-k8s-diff-port-519831" has status "Ready":"True"
	I0410 22:49:31.591822   58701 node_ready.go:38] duration metric: took 7.504383585s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:31.591835   58701 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:31.599103   58701 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607758   58701 pod_ready.go:92] pod "coredns-76f75df574-ghnvx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:31.607787   58701 pod_ready.go:81] duration metric: took 8.655521ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607801   58701 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:33.690936   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.191950196s)
	I0410 22:49:33.690965   57270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.939318786s)
	I0410 22:49:33.691014   57270 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0410 22:49:33.691045   57270 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:33.690973   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0410 22:49:33.691091   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.691101   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:33.691163   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.695868   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:30.585515   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.085273   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.585347   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.585361   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.085648   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.585256   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.084938   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.585005   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:35.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.018733   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:36.019904   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:33.615785   58701 pod_ready.go:102] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:35.115811   58701 pod_ready.go:92] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:35.115846   58701 pod_ready.go:81] duration metric: took 3.508038321s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:35.115856   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123593   58701 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.123624   58701 pod_ready.go:81] duration metric: took 2.007760022s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123638   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130390   58701 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.130421   58701 pod_ready.go:81] duration metric: took 6.771239ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130436   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136219   58701 pod_ready.go:92] pod "kube-proxy-5mbwx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.136253   58701 pod_ready.go:81] duration metric: took 5.809077ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136265   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142909   58701 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.142939   58701 pod_ready.go:81] duration metric: took 6.664922ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142954   58701 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:35.767190   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1: (2.075997626s)
	I0410 22:49:35.767227   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1 from cache
	I0410 22:49:35.767261   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767278   57270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071386498s)
	I0410 22:49:35.767326   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767327   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0410 22:49:35.767497   57270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:35.773679   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0410 22:49:37.666289   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1: (1.898906389s)
	I0410 22:49:37.666326   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1 from cache
	I0410 22:49:37.666358   57270 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:37.666422   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:39.652778   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.986322091s)
	I0410 22:49:39.652820   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0410 22:49:39.652855   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:39.652951   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:35.585228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.085699   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.585690   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.085760   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.584867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:37.584947   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:37.625964   57719 cri.go:89] found id: ""
	I0410 22:49:37.625989   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.625996   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:37.626001   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:37.626046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:37.669151   57719 cri.go:89] found id: ""
	I0410 22:49:37.669178   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.669188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:37.669194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:37.669242   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:37.711426   57719 cri.go:89] found id: ""
	I0410 22:49:37.711456   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.711466   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:37.711474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:37.711538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:37.754678   57719 cri.go:89] found id: ""
	I0410 22:49:37.754707   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.754719   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:37.754726   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:37.754809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:37.795259   57719 cri.go:89] found id: ""
	I0410 22:49:37.795291   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.795301   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:37.795307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:37.795375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:37.836961   57719 cri.go:89] found id: ""
	I0410 22:49:37.836994   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.837004   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:37.837011   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:37.837075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:37.876195   57719 cri.go:89] found id: ""
	I0410 22:49:37.876223   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.876233   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:37.876239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:37.876290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:37.911688   57719 cri.go:89] found id: ""
	I0410 22:49:37.911715   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.911725   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:37.911736   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:37.911751   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:37.954690   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:37.954734   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:38.006731   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:38.006771   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:38.024290   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:38.024314   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:38.148504   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:38.148529   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:38.148561   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:38.519483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:40.520822   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:39.150543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:41.151300   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:42.217749   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1: (2.564772479s)
	I0410 22:49:42.217778   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1 from cache
	I0410 22:49:42.217802   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:42.217843   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:44.577826   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1: (2.359955682s)
	I0410 22:49:44.577865   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1 from cache
	I0410 22:49:44.577892   57270 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:44.577940   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:40.726314   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:40.743098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:40.743168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:40.794673   57719 cri.go:89] found id: ""
	I0410 22:49:40.794697   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.794704   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:40.794710   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:40.794756   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:40.836274   57719 cri.go:89] found id: ""
	I0410 22:49:40.836308   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.836319   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:40.836327   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:40.836408   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:40.882249   57719 cri.go:89] found id: ""
	I0410 22:49:40.882276   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.882285   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:40.882292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:40.882357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:40.925829   57719 cri.go:89] found id: ""
	I0410 22:49:40.925867   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.925878   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:40.925885   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:40.925936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:40.978494   57719 cri.go:89] found id: ""
	I0410 22:49:40.978529   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.978540   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:40.978547   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:40.978611   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:41.020935   57719 cri.go:89] found id: ""
	I0410 22:49:41.020964   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.020975   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:41.020982   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:41.021040   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:41.060779   57719 cri.go:89] found id: ""
	I0410 22:49:41.060812   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.060824   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:41.060831   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:41.060885   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:41.119604   57719 cri.go:89] found id: ""
	I0410 22:49:41.119632   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.119643   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:41.119653   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:41.119667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:41.188739   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:41.188774   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:41.203682   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:41.203735   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:41.293423   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:41.293451   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:41.293468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:41.366606   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:41.366649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:43.914447   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:43.930350   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:43.930439   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:43.968867   57719 cri.go:89] found id: ""
	I0410 22:49:43.968921   57719 logs.go:276] 0 containers: []
	W0410 22:49:43.968932   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:43.968939   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:43.969012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:44.010143   57719 cri.go:89] found id: ""
	I0410 22:49:44.010169   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.010181   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:44.010188   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:44.010264   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:44.048610   57719 cri.go:89] found id: ""
	I0410 22:49:44.048637   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.048645   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:44.048651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:44.048697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:44.105939   57719 cri.go:89] found id: ""
	I0410 22:49:44.105973   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.106001   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:44.106009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:44.106086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:44.149699   57719 cri.go:89] found id: ""
	I0410 22:49:44.149726   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.149735   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:44.149743   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:44.149803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:44.193131   57719 cri.go:89] found id: ""
	I0410 22:49:44.193159   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.193167   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:44.193173   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:44.193255   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:44.233751   57719 cri.go:89] found id: ""
	I0410 22:49:44.233781   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.233789   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:44.233801   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:44.233868   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:44.284404   57719 cri.go:89] found id: ""
	I0410 22:49:44.284432   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.284441   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:44.284449   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:44.284461   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:44.330082   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:44.330118   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:44.383452   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:44.383487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:44.399604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:44.399632   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:44.476328   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:44.476368   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:44.476415   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:43.019922   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.519954   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:43.650596   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.651668   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.537183   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0410 22:49:45.537228   57270 cache_images.go:123] Successfully loaded all cached images
	I0410 22:49:45.537235   57270 cache_images.go:92] duration metric: took 16.68459637s to LoadCachedImages
	I0410 22:49:45.537249   57270 kubeadm.go:928] updating node { 192.168.50.17 8443 v1.30.0-rc.1 crio true true} ...
	I0410 22:49:45.537401   57270 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-646133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:45.537476   57270 ssh_runner.go:195] Run: crio config
	I0410 22:49:45.587002   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:45.587031   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:45.587047   57270 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:45.587069   57270 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.17 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-646133 NodeName:no-preload-646133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:45.587205   57270 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-646133"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
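The block above is the multi-document kubeadm configuration that minikube renders before copying it to /var/tmp/minikube/kubeadm.yaml.new. As a minimal sketch only (not minikube's own code; the local file name is a hypothetical copy of that config), the individual documents can be split and identified with gopkg.in/yaml.v3:

// sketch: decode the multi-document kubeadm YAML shown above and print each
// document's apiVersion and kind (InitConfiguration, ClusterConfiguration,
// KubeletConfiguration, KubeProxyConfiguration).
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config shown above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}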
	I0410 22:49:45.587272   57270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.1
	I0410 22:49:45.600694   57270 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:45.600758   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:45.613884   57270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0410 22:49:45.633871   57270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0410 22:49:45.654733   57270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0410 22:49:45.673976   57270 ssh_runner.go:195] Run: grep 192.168.50.17	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:45.678260   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:45.693499   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:45.819034   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:45.838775   57270 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133 for IP: 192.168.50.17
	I0410 22:49:45.838799   57270 certs.go:194] generating shared ca certs ...
	I0410 22:49:45.838819   57270 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:45.839010   57270 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:45.839064   57270 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:45.839078   57270 certs.go:256] generating profile certs ...
	I0410 22:49:45.839175   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.key
	I0410 22:49:45.839256   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key.d257fb06
	I0410 22:49:45.839310   57270 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key
	I0410 22:49:45.839480   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:45.839521   57270 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:45.839531   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:45.839551   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:45.839608   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:45.839633   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:45.839674   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:45.840315   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:45.897688   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:45.932242   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:45.979537   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:46.020562   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0410 22:49:46.057254   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:49:46.084070   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:46.112807   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0410 22:49:46.141650   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:46.170167   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:46.196917   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:46.222645   57270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:46.242626   57270 ssh_runner.go:195] Run: openssl version
	I0410 22:49:46.249048   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:46.265110   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270018   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270083   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.276298   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:46.288165   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:46.299040   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303584   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303627   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.309278   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:46.319990   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:46.331654   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336700   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336750   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.342767   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:46.355005   57270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:46.359870   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:46.366270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:46.372625   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:46.379270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:46.386312   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:46.392796   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:49:46.399209   57270 kubeadm.go:391] StartCluster: {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:46.399318   57270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:46.399405   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.439061   57270 cri.go:89] found id: ""
	I0410 22:49:46.439149   57270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:46.450243   57270 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:46.450265   57270 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:46.450271   57270 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:46.450323   57270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:46.460553   57270 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:46.461608   57270 kubeconfig.go:125] found "no-preload-646133" server: "https://192.168.50.17:8443"
	I0410 22:49:46.464469   57270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:46.474775   57270 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.17
	I0410 22:49:46.474808   57270 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:46.474820   57270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:46.474860   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.514933   57270 cri.go:89] found id: ""
	I0410 22:49:46.515010   57270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:46.533830   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:46.547026   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:46.547042   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:46.547081   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:49:46.557093   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:46.557157   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:46.567102   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:49:46.576939   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:46.576998   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:46.586921   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.596189   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:46.596260   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.607803   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:49:46.618166   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:46.618240   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:46.628406   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:46.638748   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:46.767824   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.028868   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.261006059s)
	I0410 22:49:48.028907   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.253185   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.323164   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.404069   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:48.404153   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:48.904557   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.404477   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.437891   57270 api_server.go:72] duration metric: took 1.033818826s to wait for apiserver process to appear ...
	I0410 22:49:49.437927   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:49.437953   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:49.438623   57270 api_server.go:269] stopped: https://192.168.50.17:8443/healthz: Get "https://192.168.50.17:8443/healthz": dial tcp 192.168.50.17:8443: connect: connection refused
	I0410 22:49:47.054122   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:47.069583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:47.069654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:47.113953   57719 cri.go:89] found id: ""
	I0410 22:49:47.113981   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.113989   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:47.113995   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:47.114054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:47.156770   57719 cri.go:89] found id: ""
	I0410 22:49:47.156798   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.156808   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:47.156814   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:47.156891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:47.195227   57719 cri.go:89] found id: ""
	I0410 22:49:47.195252   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.195261   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:47.195266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:47.195328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:47.238109   57719 cri.go:89] found id: ""
	I0410 22:49:47.238138   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.238150   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:47.238157   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:47.238212   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.285062   57719 cri.go:89] found id: ""
	I0410 22:49:47.285093   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.285101   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:47.285108   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:47.285185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:47.324635   57719 cri.go:89] found id: ""
	I0410 22:49:47.324663   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.324670   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:47.324676   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:47.324744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:47.365404   57719 cri.go:89] found id: ""
	I0410 22:49:47.365437   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.365445   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:47.365468   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:47.365535   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:47.412296   57719 cri.go:89] found id: ""
	I0410 22:49:47.412335   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.412346   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:47.412367   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:47.412384   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:47.497998   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:47.498019   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:47.498033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:47.590502   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:47.590536   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:47.647665   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:47.647692   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:47.697704   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:47.697741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.213410   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:50.229408   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:50.229488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:50.268514   57719 cri.go:89] found id: ""
	I0410 22:49:50.268545   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.268556   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:50.268563   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:50.268620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:50.308733   57719 cri.go:89] found id: ""
	I0410 22:49:50.308762   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.308790   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:50.308796   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:50.308857   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:50.353929   57719 cri.go:89] found id: ""
	I0410 22:49:50.353966   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.353977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:50.353985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:50.354043   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:50.397979   57719 cri.go:89] found id: ""
	I0410 22:49:50.398009   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.398019   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:50.398026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:50.398086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.521284   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.018571   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.020874   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:48.151768   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.151820   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:49.939075   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.355813   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:52.355855   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:52.355868   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.502702   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.502733   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.502796   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.509360   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.509401   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.939056   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.946114   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.946154   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.438741   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.444154   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:53.444187   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.938848   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.947578   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:49:53.956247   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:49:53.956281   57270 api_server.go:131] duration metric: took 4.518344859s to wait for apiserver health ...
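The 403 and 500 responses above are the normal progression while the restarted apiserver finishes its post-start hooks; the wait ends once /healthz returns 200. A minimal sketch of that style of polling (endpoint and ~500ms retry interval taken from the log above; this is not minikube's implementation, and certificate verification is skipped because the anonymous probe presents no client certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
	}
	url := "https://192.168.50.17:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err) // e.g. connection refused while the apiserver restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				fmt.Println(string(body)) // "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}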
	I0410 22:49:53.956292   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:53.956301   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:53.958053   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:53.959420   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:53.973242   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:54.004623   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:54.024138   57270 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:54.024185   57270 system_pods.go:61] "coredns-7db6d8ff4d-lbcp6" [1ff36529-d718-41e7-9b61-54ba32efab0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:54.024195   57270 system_pods.go:61] "etcd-no-preload-646133" [a704a953-1418-4425-8ac1-272c632050c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:54.024214   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [90d4ff18-767c-4dbf-b4ad-ff02cb3d542f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:54.024231   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [82c0778e-690f-41a6-a57f-017ab79fd029] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:54.024243   57270 system_pods.go:61] "kube-proxy-v5fbl" [002efd18-4375-455b-9b4a-15bb739120e0] Running
	I0410 22:49:54.024252   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fa9898bc-36a6-4cc4-91e6-bba4ccd22d9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:54.024264   57270 system_pods.go:61] "metrics-server-569cc877fc-pw276" [22de5c2f-13ab-4f69-8eb6-ec4a3c3d1e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:54.024277   57270 system_pods.go:61] "storage-provisioner" [1028921e-3924-4614-bcb6-f949c18e9e4e] Running
	I0410 22:49:54.024287   57270 system_pods.go:74] duration metric: took 19.638409ms to wait for pod list to return data ...
	I0410 22:49:54.024301   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:54.031666   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:54.031694   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:54.031705   57270 node_conditions.go:105] duration metric: took 7.394201ms to run NodePressure ...
	I0410 22:49:54.031720   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:54.339352   57270 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345115   57270 kubeadm.go:733] kubelet initialised
	I0410 22:49:54.345146   57270 kubeadm.go:734] duration metric: took 5.76519ms waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345156   57270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:54.352254   57270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
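The pod_ready lines that follow poll each system-critical pod until its "Ready" condition becomes True. As an illustrative sketch only (not minikube's code; the kubeconfig path and pod name are the ones appearing in the log above), the same check can be made with client-go:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is an assumption; any admin kubeconfig for this cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-lbcp6", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// A pod counts as "Ready" when its PodReady condition is True; the log's
	// "Ready":"False" entries mean this condition has not flipped yet.
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("Ready=%s reason=%q\n", c.Status, c.Reason)
		}
	}
}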
	I0410 22:49:50.436191   57719 cri.go:89] found id: ""
	I0410 22:49:50.436222   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.436234   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:50.436241   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:50.436316   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:50.476462   57719 cri.go:89] found id: ""
	I0410 22:49:50.476486   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.476494   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:50.476499   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:50.476557   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:50.520025   57719 cri.go:89] found id: ""
	I0410 22:49:50.520054   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.520063   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:50.520071   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:50.520127   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:50.564535   57719 cri.go:89] found id: ""
	I0410 22:49:50.564570   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.564581   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:50.564593   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:50.564624   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:50.620587   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:50.620629   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.634802   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:50.634832   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:50.707625   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:50.707655   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:50.707671   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:50.791935   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:50.791970   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:53.339109   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:53.361555   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:53.361632   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:53.428170   57719 cri.go:89] found id: ""
	I0410 22:49:53.428202   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.428212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:53.428219   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:53.428281   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:53.501929   57719 cri.go:89] found id: ""
	I0410 22:49:53.501957   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.501968   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:53.501977   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:53.502055   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:53.548844   57719 cri.go:89] found id: ""
	I0410 22:49:53.548871   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.548890   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:53.548897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:53.548949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:53.595056   57719 cri.go:89] found id: ""
	I0410 22:49:53.595081   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.595090   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:53.595098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:53.595153   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:53.638885   57719 cri.go:89] found id: ""
	I0410 22:49:53.638920   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.638938   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:53.638946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:53.639046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:53.685526   57719 cri.go:89] found id: ""
	I0410 22:49:53.685565   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.685573   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:53.685579   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:53.685650   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:53.725084   57719 cri.go:89] found id: ""
	I0410 22:49:53.725112   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.725119   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:53.725125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:53.725172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:53.767031   57719 cri.go:89] found id: ""
	I0410 22:49:53.767062   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.767072   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:53.767083   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:53.767103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:53.826570   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:53.826618   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:53.843784   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:53.843822   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:53.926277   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:53.926299   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:53.926317   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:54.024735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:54.024782   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:54.519305   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.520139   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.651382   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:55.149798   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:57.150803   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.359479   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:58.859341   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.586265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:56.602113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:56.602200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:56.647041   57719 cri.go:89] found id: ""
	I0410 22:49:56.647074   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.647086   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:56.647094   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:56.647168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:56.688053   57719 cri.go:89] found id: ""
	I0410 22:49:56.688086   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.688096   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:56.688104   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:56.688190   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:56.729176   57719 cri.go:89] found id: ""
	I0410 22:49:56.729210   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.729221   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:56.729229   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:56.729293   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:56.768877   57719 cri.go:89] found id: ""
	I0410 22:49:56.768905   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.768913   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:56.768919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:56.768966   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:56.807228   57719 cri.go:89] found id: ""
	I0410 22:49:56.807274   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.807286   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:56.807294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:56.807361   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:56.848183   57719 cri.go:89] found id: ""
	I0410 22:49:56.848216   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.848224   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:56.848230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:56.848284   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:56.887894   57719 cri.go:89] found id: ""
	I0410 22:49:56.887923   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.887931   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:56.887937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:56.887993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:56.926908   57719 cri.go:89] found id: ""
	I0410 22:49:56.926935   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.926944   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:56.926952   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:56.926968   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:57.012614   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:57.012640   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:57.012657   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:57.098735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:57.098784   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:57.140798   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:57.140831   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:57.204239   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:57.204283   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:59.720328   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:59.735964   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:59.736042   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:59.774351   57719 cri.go:89] found id: ""
	I0410 22:49:59.774383   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.774393   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:59.774407   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:59.774468   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:59.817222   57719 cri.go:89] found id: ""
	I0410 22:49:59.817248   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.817255   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:59.817260   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:59.817310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:59.854551   57719 cri.go:89] found id: ""
	I0410 22:49:59.854582   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.854594   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:59.854602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:59.854656   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:59.894334   57719 cri.go:89] found id: ""
	I0410 22:49:59.894367   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.894375   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:59.894381   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:59.894442   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:59.932446   57719 cri.go:89] found id: ""
	I0410 22:49:59.932472   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.932482   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:59.932489   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:59.932552   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:59.969168   57719 cri.go:89] found id: ""
	I0410 22:49:59.969193   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.969201   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:59.969209   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:59.969273   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:00.006918   57719 cri.go:89] found id: ""
	I0410 22:50:00.006960   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.006972   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:00.006979   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:00.007036   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:00.050380   57719 cri.go:89] found id: ""
	I0410 22:50:00.050411   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.050424   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:00.050433   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:00.050454   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:00.066340   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:00.066366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:00.146454   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:00.146479   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:00.146494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:00.231174   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:00.231225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:00.278732   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:00.278759   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:59.020938   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.518584   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:59.151137   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.650307   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.359992   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:01.360021   57270 pod_ready.go:81] duration metric: took 7.007734788s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:01.360035   57270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867322   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:02.867349   57270 pod_ready.go:81] duration metric: took 1.507305949s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867362   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.833035   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:02.847316   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:02.847380   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:02.888793   57719 cri.go:89] found id: ""
	I0410 22:50:02.888821   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.888832   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:02.888840   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:02.888897   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:02.926495   57719 cri.go:89] found id: ""
	I0410 22:50:02.926525   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.926535   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:02.926542   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:02.926603   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:02.966185   57719 cri.go:89] found id: ""
	I0410 22:50:02.966217   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.966227   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:02.966233   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:02.966295   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:03.007383   57719 cri.go:89] found id: ""
	I0410 22:50:03.007408   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.007414   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:03.007420   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:03.007490   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:03.044245   57719 cri.go:89] found id: ""
	I0410 22:50:03.044273   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.044281   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:03.044292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:03.044367   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:03.078820   57719 cri.go:89] found id: ""
	I0410 22:50:03.078849   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.078859   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:03.078866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:03.078927   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:03.117205   57719 cri.go:89] found id: ""
	I0410 22:50:03.117233   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.117244   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:03.117251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:03.117313   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:03.155698   57719 cri.go:89] found id: ""
	I0410 22:50:03.155725   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.155735   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:03.155743   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:03.155758   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:03.231685   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:03.231712   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:03.231724   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:03.315122   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:03.315167   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:03.361151   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:03.361186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:03.412134   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:03.412168   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:04.017523   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.024382   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.150291   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.151488   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.873656   57270 pod_ready.go:102] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:05.874079   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:05.874106   57270 pod_ready.go:81] duration metric: took 3.006735064s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.874116   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:07.880447   57270 pod_ready.go:102] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.881209   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.881241   57270 pod_ready.go:81] duration metric: took 3.007117254s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.881271   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887939   57270 pod_ready.go:92] pod "kube-proxy-v5fbl" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.887963   57270 pod_ready.go:81] duration metric: took 6.68304ms for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887975   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894389   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.894415   57270 pod_ready.go:81] duration metric: took 6.43215ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894428   57270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.928116   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:05.942237   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:05.942337   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:05.983813   57719 cri.go:89] found id: ""
	I0410 22:50:05.983842   57719 logs.go:276] 0 containers: []
	W0410 22:50:05.983853   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:05.983861   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:05.983945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:06.024590   57719 cri.go:89] found id: ""
	I0410 22:50:06.024618   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.024626   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:06.024637   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:06.024698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:06.063040   57719 cri.go:89] found id: ""
	I0410 22:50:06.063075   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.063087   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:06.063094   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:06.063160   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:06.102224   57719 cri.go:89] found id: ""
	I0410 22:50:06.102250   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.102259   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:06.102273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:06.102342   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:06.144202   57719 cri.go:89] found id: ""
	I0410 22:50:06.144229   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.144236   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:06.144242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:06.144288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:06.189215   57719 cri.go:89] found id: ""
	I0410 22:50:06.189243   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.189250   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:06.189256   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:06.189308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:06.225218   57719 cri.go:89] found id: ""
	I0410 22:50:06.225247   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.225258   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:06.225266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:06.225330   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:06.265229   57719 cri.go:89] found id: ""
	I0410 22:50:06.265262   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.265273   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:06.265283   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:06.265306   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:06.279794   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:06.279825   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:06.348038   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:06.348063   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:06.348079   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:06.431293   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:06.431339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:06.476033   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:06.476060   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.032099   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:09.046628   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:09.046765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:09.086900   57719 cri.go:89] found id: ""
	I0410 22:50:09.086928   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.086936   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:09.086942   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:09.086998   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:09.124989   57719 cri.go:89] found id: ""
	I0410 22:50:09.125018   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.125028   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:09.125035   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:09.125096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:09.163720   57719 cri.go:89] found id: ""
	I0410 22:50:09.163749   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.163761   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:09.163769   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:09.163822   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:09.203846   57719 cri.go:89] found id: ""
	I0410 22:50:09.203875   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.203883   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:09.203888   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:09.203945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:09.242974   57719 cri.go:89] found id: ""
	I0410 22:50:09.243002   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.243016   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:09.243024   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:09.243092   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:09.278664   57719 cri.go:89] found id: ""
	I0410 22:50:09.278687   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.278694   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:09.278700   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:09.278762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:09.313335   57719 cri.go:89] found id: ""
	I0410 22:50:09.313359   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.313367   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:09.313372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:09.313419   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:09.351160   57719 cri.go:89] found id: ""
	I0410 22:50:09.351195   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.351206   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:09.351225   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:09.351239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:09.425989   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:09.426015   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:09.426033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:09.505189   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:09.505223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:09.549619   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:09.549651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.604322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:09.604360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:08.520115   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:11.018253   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.649190   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.650453   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.903726   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.401154   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:12.119780   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:12.135377   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:12.135458   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:12.178105   57719 cri.go:89] found id: ""
	I0410 22:50:12.178129   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.178138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:12.178144   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:12.178207   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:12.217369   57719 cri.go:89] found id: ""
	I0410 22:50:12.217397   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.217409   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:12.217424   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:12.217488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:12.254185   57719 cri.go:89] found id: ""
	I0410 22:50:12.254213   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.254222   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:12.254230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:12.254291   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:12.295007   57719 cri.go:89] found id: ""
	I0410 22:50:12.295038   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.295048   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:12.295057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:12.295125   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:12.334620   57719 cri.go:89] found id: ""
	I0410 22:50:12.334644   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.334651   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:12.334657   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:12.334707   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:12.371217   57719 cri.go:89] found id: ""
	I0410 22:50:12.371241   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.371249   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:12.371255   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:12.371302   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:12.409571   57719 cri.go:89] found id: ""
	I0410 22:50:12.409599   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.409608   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:12.409617   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:12.409675   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:12.453133   57719 cri.go:89] found id: ""
	I0410 22:50:12.453159   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.453169   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:12.453180   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:12.453194   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:12.505322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:12.505360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:12.520284   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:12.520315   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:12.608057   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:12.608082   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:12.608097   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:12.693240   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:12.693274   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:15.244628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:15.261915   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:15.262020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:15.302874   57719 cri.go:89] found id: ""
	I0410 22:50:15.302903   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.302910   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:15.302916   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:15.302973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:15.347492   57719 cri.go:89] found id: ""
	I0410 22:50:15.347518   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.347527   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:15.347534   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:15.347598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:15.394156   57719 cri.go:89] found id: ""
	I0410 22:50:15.394188   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.394198   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:15.394205   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:15.394265   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:13.518316   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.520507   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.150145   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.651083   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.401582   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:17.901179   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.437656   57719 cri.go:89] found id: ""
	I0410 22:50:15.437682   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.437690   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:15.437695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:15.437748   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:15.475658   57719 cri.go:89] found id: ""
	I0410 22:50:15.475686   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.475697   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:15.475704   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:15.475765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:15.517908   57719 cri.go:89] found id: ""
	I0410 22:50:15.517930   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.517937   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:15.517942   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:15.517991   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:15.560083   57719 cri.go:89] found id: ""
	I0410 22:50:15.560108   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.560117   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:15.560123   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:15.560178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:15.603967   57719 cri.go:89] found id: ""
	I0410 22:50:15.603994   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.604002   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:15.604013   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:15.604028   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:15.659994   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:15.660029   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:15.675627   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:15.675658   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:15.761297   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:15.761320   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:15.761339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:15.839225   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:15.839265   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.386062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:18.399609   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:18.399677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:18.443002   57719 cri.go:89] found id: ""
	I0410 22:50:18.443030   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.443040   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:18.443048   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:18.443106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:18.485089   57719 cri.go:89] found id: ""
	I0410 22:50:18.485121   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.485132   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:18.485140   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:18.485200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:18.524310   57719 cri.go:89] found id: ""
	I0410 22:50:18.524338   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.524347   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:18.524354   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:18.524412   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:18.563535   57719 cri.go:89] found id: ""
	I0410 22:50:18.563573   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.563582   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:18.563587   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:18.563634   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:18.600451   57719 cri.go:89] found id: ""
	I0410 22:50:18.600478   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.600487   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:18.600495   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:18.600562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:18.640445   57719 cri.go:89] found id: ""
	I0410 22:50:18.640472   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.640480   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:18.640485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:18.640550   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:18.677691   57719 cri.go:89] found id: ""
	I0410 22:50:18.677725   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.677746   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:18.677754   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:18.677817   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:18.716753   57719 cri.go:89] found id: ""
	I0410 22:50:18.716850   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.716876   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:18.716897   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:18.716918   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:18.804099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:18.804130   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:18.804144   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:18.883569   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:18.883611   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.930014   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:18.930045   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:18.980029   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:18.980065   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:18.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.020820   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:18.151029   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.650000   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:19.904069   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.401462   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:24.401892   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:21.495499   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:21.511001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:21.511075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:21.551469   57719 cri.go:89] found id: ""
	I0410 22:50:21.551511   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.551522   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:21.551540   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:21.551605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:21.590539   57719 cri.go:89] found id: ""
	I0410 22:50:21.590570   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.590580   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:21.590587   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:21.590654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:21.629005   57719 cri.go:89] found id: ""
	I0410 22:50:21.629030   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.629042   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:21.629048   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:21.629108   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:21.669745   57719 cri.go:89] found id: ""
	I0410 22:50:21.669767   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.669774   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:21.669780   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:21.669834   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:21.707806   57719 cri.go:89] found id: ""
	I0410 22:50:21.707831   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.707839   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:21.707844   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:21.707892   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:21.746698   57719 cri.go:89] found id: ""
	I0410 22:50:21.746727   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.746736   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:21.746742   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:21.746802   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:21.783048   57719 cri.go:89] found id: ""
	I0410 22:50:21.783070   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.783079   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:21.783084   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:21.783131   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:21.822457   57719 cri.go:89] found id: ""
	I0410 22:50:21.822484   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.822492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:21.822500   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:21.822513   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:21.894706   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:21.894747   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:21.909861   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:21.909903   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:21.999344   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:21.999370   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:21.999386   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:22.080004   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:22.080042   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:24.620924   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:24.634937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:24.634999   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:24.686619   57719 cri.go:89] found id: ""
	I0410 22:50:24.686644   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.686655   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:24.686662   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:24.686744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:24.723632   57719 cri.go:89] found id: ""
	I0410 22:50:24.723658   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.723667   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:24.723675   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:24.723738   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:24.760708   57719 cri.go:89] found id: ""
	I0410 22:50:24.760739   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.760750   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:24.760757   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:24.760804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:24.795680   57719 cri.go:89] found id: ""
	I0410 22:50:24.795712   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.795722   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:24.795729   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:24.795793   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:24.833033   57719 cri.go:89] found id: ""
	I0410 22:50:24.833063   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.833074   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:24.833082   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:24.833130   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:24.872840   57719 cri.go:89] found id: ""
	I0410 22:50:24.872864   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.872871   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:24.872877   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:24.872936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:24.915640   57719 cri.go:89] found id: ""
	I0410 22:50:24.915678   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.915688   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:24.915696   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:24.915755   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:24.957164   57719 cri.go:89] found id: ""
	I0410 22:50:24.957207   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.957219   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:24.957230   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:24.957244   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:25.006551   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:25.006601   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:25.021623   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:25.021649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:25.094699   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:25.094722   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:25.094741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:25.181280   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:25.181316   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:22.518442   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.018206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.650481   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.151162   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:26.904127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.400642   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.723475   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:27.737294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:27.737381   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:27.776098   57719 cri.go:89] found id: ""
	I0410 22:50:27.776126   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.776138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:27.776146   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:27.776203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:27.814324   57719 cri.go:89] found id: ""
	I0410 22:50:27.814352   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.814364   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:27.814371   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:27.814447   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:27.849573   57719 cri.go:89] found id: ""
	I0410 22:50:27.849603   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.849614   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:27.849621   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:27.849682   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:27.888904   57719 cri.go:89] found id: ""
	I0410 22:50:27.888932   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.888940   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:27.888946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:27.888993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:27.931772   57719 cri.go:89] found id: ""
	I0410 22:50:27.931800   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.931812   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:27.931821   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:27.931881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:27.975633   57719 cri.go:89] found id: ""
	I0410 22:50:27.975666   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.975676   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:27.975684   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:27.975736   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:28.012251   57719 cri.go:89] found id: ""
	I0410 22:50:28.012280   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.012290   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:28.012298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:28.012364   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:28.048848   57719 cri.go:89] found id: ""
	I0410 22:50:28.048886   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.048898   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:28.048908   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:28.048923   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:28.102215   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:28.102257   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:28.118052   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:28.118081   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:28.190738   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:28.190762   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:28.190777   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:28.269294   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:28.269330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:27.519211   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.521111   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.017915   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.651922   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.150852   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:31.401210   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:33.902054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.833927   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:30.848196   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:30.848266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:30.886077   57719 cri.go:89] found id: ""
	I0410 22:50:30.886117   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.886127   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:30.886133   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:30.886179   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:30.924638   57719 cri.go:89] found id: ""
	I0410 22:50:30.924668   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.924678   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:30.924686   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:30.924762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:30.961106   57719 cri.go:89] found id: ""
	I0410 22:50:30.961136   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.961147   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:30.961154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:30.961213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:31.001374   57719 cri.go:89] found id: ""
	I0410 22:50:31.001412   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.001427   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:31.001434   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:31.001498   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:31.038928   57719 cri.go:89] found id: ""
	I0410 22:50:31.038961   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.038971   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:31.038980   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:31.039057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:31.077033   57719 cri.go:89] found id: ""
	I0410 22:50:31.077067   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.077076   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:31.077083   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:31.077139   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:31.115227   57719 cri.go:89] found id: ""
	I0410 22:50:31.115257   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.115266   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:31.115273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:31.115335   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:31.157339   57719 cri.go:89] found id: ""
	I0410 22:50:31.157372   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.157382   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:31.157393   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:31.157409   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:31.198742   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:31.198770   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:31.255388   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:31.255422   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:31.272018   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:31.272048   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:31.344503   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:31.344524   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:31.344541   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:33.925749   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:33.939402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:33.939475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:33.976070   57719 cri.go:89] found id: ""
	I0410 22:50:33.976093   57719 logs.go:276] 0 containers: []
	W0410 22:50:33.976100   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:33.976106   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:33.976172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:34.013723   57719 cri.go:89] found id: ""
	I0410 22:50:34.013748   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.013758   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:34.013765   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:34.013821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:34.062678   57719 cri.go:89] found id: ""
	I0410 22:50:34.062704   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.062712   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:34.062718   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:34.062774   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:34.123007   57719 cri.go:89] found id: ""
	I0410 22:50:34.123038   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.123046   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:34.123052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:34.123096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:34.188811   57719 cri.go:89] found id: ""
	I0410 22:50:34.188841   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.188852   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:34.188859   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:34.188949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:34.223585   57719 cri.go:89] found id: ""
	I0410 22:50:34.223609   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.223618   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:34.223625   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:34.223680   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:34.260004   57719 cri.go:89] found id: ""
	I0410 22:50:34.260028   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.260036   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:34.260041   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:34.260096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:34.303064   57719 cri.go:89] found id: ""
	I0410 22:50:34.303093   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.303104   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:34.303115   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:34.303134   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:34.359105   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:34.359142   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:34.375420   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:34.375450   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:34.449619   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:34.449645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:34.449660   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:34.534214   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:34.534248   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:34.518609   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.016973   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.649917   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:34.661652   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.150648   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:36.401988   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:38.901505   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.076525   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:37.090789   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:37.090849   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:37.130848   57719 cri.go:89] found id: ""
	I0410 22:50:37.130881   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.130893   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:37.130900   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:37.130967   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:37.170158   57719 cri.go:89] found id: ""
	I0410 22:50:37.170181   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.170188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:37.170194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:37.170269   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:37.210238   57719 cri.go:89] found id: ""
	I0410 22:50:37.210264   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.210274   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:37.210282   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:37.210328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:37.256763   57719 cri.go:89] found id: ""
	I0410 22:50:37.256789   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.256800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:37.256807   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:37.256875   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:37.295323   57719 cri.go:89] found id: ""
	I0410 22:50:37.295355   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.295364   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:37.295372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:37.295443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:37.334066   57719 cri.go:89] found id: ""
	I0410 22:50:37.334094   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.334105   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:37.334113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:37.334170   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:37.374428   57719 cri.go:89] found id: ""
	I0410 22:50:37.374458   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.374477   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:37.374485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:37.374544   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:37.412114   57719 cri.go:89] found id: ""
	I0410 22:50:37.412142   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.412152   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:37.412161   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:37.412174   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:37.453693   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:37.453717   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:37.505484   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:37.505524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:37.523645   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:37.523672   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:37.595107   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:37.595134   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:37.595150   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.180649   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:40.195168   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:40.195243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:40.240130   57719 cri.go:89] found id: ""
	I0410 22:50:40.240160   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.240169   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:40.240175   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:40.240241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:40.276366   57719 cri.go:89] found id: ""
	I0410 22:50:40.276390   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.276406   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:40.276412   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:40.276466   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:40.314991   57719 cri.go:89] found id: ""
	I0410 22:50:40.315016   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.315023   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:40.315029   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:40.315075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:40.354301   57719 cri.go:89] found id: ""
	I0410 22:50:40.354331   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.354342   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:40.354349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:40.354414   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:40.393093   57719 cri.go:89] found id: ""
	I0410 22:50:40.393125   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.393135   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:40.393143   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:40.393204   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:39.021170   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:41.518285   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:39.650047   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.150206   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.902024   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.904180   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.429641   57719 cri.go:89] found id: ""
	I0410 22:50:40.429665   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.429674   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:40.429680   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:40.429727   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:40.468184   57719 cri.go:89] found id: ""
	I0410 22:50:40.468213   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.468224   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:40.468232   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:40.468304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:40.505586   57719 cri.go:89] found id: ""
	I0410 22:50:40.505616   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.505627   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:40.505637   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:40.505652   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:40.562078   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:40.562119   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:40.578135   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:40.578213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:40.659018   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:40.659047   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:40.659061   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.746434   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:40.746478   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.287852   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:43.301797   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:43.301869   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:43.339778   57719 cri.go:89] found id: ""
	I0410 22:50:43.339813   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.339822   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:43.339829   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:43.339893   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:43.378716   57719 cri.go:89] found id: ""
	I0410 22:50:43.378748   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.378759   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:43.378767   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:43.378836   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:43.417128   57719 cri.go:89] found id: ""
	I0410 22:50:43.417152   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.417163   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:43.417171   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:43.417234   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:43.459577   57719 cri.go:89] found id: ""
	I0410 22:50:43.459608   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.459617   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:43.459623   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:43.459678   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:43.497519   57719 cri.go:89] found id: ""
	I0410 22:50:43.497551   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.497561   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:43.497566   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:43.497620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:43.534400   57719 cri.go:89] found id: ""
	I0410 22:50:43.534433   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.534444   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:43.534451   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:43.534540   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:43.574213   57719 cri.go:89] found id: ""
	I0410 22:50:43.574242   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.574253   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:43.574283   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:43.574344   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:43.611078   57719 cri.go:89] found id: ""
	I0410 22:50:43.611106   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.611113   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:43.611121   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:43.611137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:43.698166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:43.698202   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.749368   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:43.749395   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:43.801584   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:43.801621   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:43.817012   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:43.817050   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:43.892325   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:43.518660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.017804   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:44.650389   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.650560   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:45.401723   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:47.901852   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.393325   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:46.407985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:46.408045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:46.442704   57719 cri.go:89] found id: ""
	I0410 22:50:46.442735   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.442745   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:46.442753   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:46.442821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:46.485582   57719 cri.go:89] found id: ""
	I0410 22:50:46.485611   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.485618   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:46.485625   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:46.485683   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:46.524199   57719 cri.go:89] found id: ""
	I0410 22:50:46.524227   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.524234   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:46.524240   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:46.524288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:46.560655   57719 cri.go:89] found id: ""
	I0410 22:50:46.560685   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.560694   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:46.560701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:46.560839   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:46.596617   57719 cri.go:89] found id: ""
	I0410 22:50:46.596646   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.596658   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:46.596666   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:46.596739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:46.634316   57719 cri.go:89] found id: ""
	I0410 22:50:46.634339   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.634347   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:46.634352   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:46.634399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:46.671466   57719 cri.go:89] found id: ""
	I0410 22:50:46.671493   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.671502   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:46.671509   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:46.671582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:46.709228   57719 cri.go:89] found id: ""
	I0410 22:50:46.709254   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.709265   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:46.709275   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:46.709291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:46.761329   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:46.761366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:46.778265   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:46.778288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:46.851092   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:46.851113   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:46.851125   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:46.929181   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:46.929223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:49.471285   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:49.485474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:49.485551   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:49.523799   57719 cri.go:89] found id: ""
	I0410 22:50:49.523826   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.523838   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:49.523846   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:49.523899   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:49.562102   57719 cri.go:89] found id: ""
	I0410 22:50:49.562129   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.562137   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:49.562143   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:49.562196   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:49.600182   57719 cri.go:89] found id: ""
	I0410 22:50:49.600204   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.600211   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:49.600216   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:49.600262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:49.640002   57719 cri.go:89] found id: ""
	I0410 22:50:49.640028   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.640039   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:49.640047   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:49.640111   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:49.678815   57719 cri.go:89] found id: ""
	I0410 22:50:49.678847   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.678858   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:49.678866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:49.678929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:49.716933   57719 cri.go:89] found id: ""
	I0410 22:50:49.716959   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.716969   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:49.716976   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:49.717039   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:49.756018   57719 cri.go:89] found id: ""
	I0410 22:50:49.756050   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.756060   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:49.756068   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:49.756132   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:49.802066   57719 cri.go:89] found id: ""
	I0410 22:50:49.802094   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.802103   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:49.802110   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:49.802123   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:49.856363   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:49.856417   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:49.872297   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:49.872330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:49.950152   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:49.950174   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:49.950185   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:50.031251   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:50.031291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:48.517547   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.517942   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:49.150498   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:51.151491   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.401650   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.401866   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.574794   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:52.589052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:52.589117   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:52.625911   57719 cri.go:89] found id: ""
	I0410 22:50:52.625941   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.625952   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:52.625960   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:52.626020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:52.668749   57719 cri.go:89] found id: ""
	I0410 22:50:52.668773   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.668781   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:52.668787   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:52.668835   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:52.713420   57719 cri.go:89] found id: ""
	I0410 22:50:52.713447   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.713457   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:52.713473   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:52.713538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:52.750265   57719 cri.go:89] found id: ""
	I0410 22:50:52.750294   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.750301   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:52.750307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:52.750354   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:52.787552   57719 cri.go:89] found id: ""
	I0410 22:50:52.787586   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.787597   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:52.787604   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:52.787670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:52.827988   57719 cri.go:89] found id: ""
	I0410 22:50:52.828013   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.828020   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:52.828026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:52.828072   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:52.864115   57719 cri.go:89] found id: ""
	I0410 22:50:52.864144   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.864155   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:52.864161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:52.864222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:52.906673   57719 cri.go:89] found id: ""
	I0410 22:50:52.906702   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.906712   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:52.906723   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:52.906742   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:52.960842   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:52.960892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:52.976084   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:52.976114   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:53.052612   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:53.052638   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:53.052656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:53.132465   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:53.132518   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:53.018789   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.518169   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:53.154117   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.653267   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:54.903797   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:57.401445   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.676947   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:55.691098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:55.691183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:55.728711   57719 cri.go:89] found id: ""
	I0410 22:50:55.728740   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.728750   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:55.728758   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:55.728824   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:55.768540   57719 cri.go:89] found id: ""
	I0410 22:50:55.768568   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.768578   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:55.768584   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:55.768649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:55.806901   57719 cri.go:89] found id: ""
	I0410 22:50:55.806928   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.806938   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:55.806945   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:55.807019   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:55.846777   57719 cri.go:89] found id: ""
	I0410 22:50:55.846807   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.846816   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:55.846822   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:55.846873   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:55.887143   57719 cri.go:89] found id: ""
	I0410 22:50:55.887172   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.887181   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:55.887186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:55.887241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:55.929008   57719 cri.go:89] found id: ""
	I0410 22:50:55.929032   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.929040   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:55.929046   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:55.929098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:55.969496   57719 cri.go:89] found id: ""
	I0410 22:50:55.969526   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.969536   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:55.969544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:55.969605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:56.007786   57719 cri.go:89] found id: ""
	I0410 22:50:56.007818   57719 logs.go:276] 0 containers: []
	W0410 22:50:56.007828   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:56.007838   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:56.007854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:56.061616   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:56.061653   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:56.078664   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:56.078689   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:56.165015   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:56.165037   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:56.165053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:56.241928   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:56.241971   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:58.785955   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:58.799544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:58.799604   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:58.837234   57719 cri.go:89] found id: ""
	I0410 22:50:58.837264   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.837275   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:58.837283   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:58.837350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:58.877818   57719 cri.go:89] found id: ""
	I0410 22:50:58.877854   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.877861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:58.877867   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:58.877921   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:58.919705   57719 cri.go:89] found id: ""
	I0410 22:50:58.919729   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.919740   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:58.919747   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:58.919809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:58.957995   57719 cri.go:89] found id: ""
	I0410 22:50:58.958020   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.958029   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:58.958036   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:58.958091   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:58.999966   57719 cri.go:89] found id: ""
	I0410 22:50:58.999995   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.000008   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:59.000016   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:59.000088   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:59.040516   57719 cri.go:89] found id: ""
	I0410 22:50:59.040541   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.040552   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:59.040560   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:59.040623   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:59.078869   57719 cri.go:89] found id: ""
	I0410 22:50:59.078899   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.078908   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:59.078913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:59.078961   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:59.116637   57719 cri.go:89] found id: ""
	I0410 22:50:59.116663   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.116670   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:59.116679   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:59.116697   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:59.195852   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:59.195892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:59.243256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:59.243282   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:59.299195   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:59.299263   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:59.314512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:59.314537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:59.386468   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
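	# --- editor's aside (not part of the captured log): a minimal sketch of re-running the diagnostic
	# --- loop above by hand on the node. Every command and path is copied verbatim from the log lines
	# --- above; only shell quoting around the pgrep pattern has been added.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is any apiserver process alive at all?
	sudo crictl ps -a --quiet --name=kube-apiserver     # any kube-apiserver container, in any state?
	sudo journalctl -u kubelet -n 400                   # kubelet logs: why the static pods are not coming up
	sudo journalctl -u crio -n 400                      # CRI-O logs: sandbox / image pull errors
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# the describe call keeps failing with "connection refused" on localhost:8443 until the apiserver is up,
	# which is exactly the failure recorded repeatedly in this log.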
	I0410 22:50:58.016995   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.018205   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:58.151543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.650140   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:59.901858   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.902933   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:04.402128   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
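	# --- editor's aside (hedged, not part of the captured log): the pod_ready lines above poll the Ready
	# --- condition of the metrics-server pods. Roughly the same check by hand, using a pod name taken from
	# --- the log and assuming a working kubeconfig for that cluster:
	kubectl -n kube-system get pod metrics-server-569cc877fc-pw276 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is unready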
	I0410 22:51:01.886907   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:01.905169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:01.905251   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:01.944154   57719 cri.go:89] found id: ""
	I0410 22:51:01.944187   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.944198   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:01.944205   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:01.944268   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:01.982743   57719 cri.go:89] found id: ""
	I0410 22:51:01.982778   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.982789   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:01.982797   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:01.982864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:02.020072   57719 cri.go:89] found id: ""
	I0410 22:51:02.020094   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.020102   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:02.020159   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:02.020213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:02.064250   57719 cri.go:89] found id: ""
	I0410 22:51:02.064273   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.064280   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:02.064286   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:02.064339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:02.105013   57719 cri.go:89] found id: ""
	I0410 22:51:02.105045   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.105054   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:02.105060   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:02.105106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:02.145664   57719 cri.go:89] found id: ""
	I0410 22:51:02.145689   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.145695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:02.145701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:02.145759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:02.189752   57719 cri.go:89] found id: ""
	I0410 22:51:02.189831   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.189850   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:02.189857   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:02.189929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:02.228315   57719 cri.go:89] found id: ""
	I0410 22:51:02.228347   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.228358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:02.228374   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:02.228390   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:02.281425   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:02.281460   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:02.296003   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:02.296031   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:02.389572   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:02.389599   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:02.389613   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.475881   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:02.475916   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.022037   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:05.037242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:05.037304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:05.073656   57719 cri.go:89] found id: ""
	I0410 22:51:05.073687   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.073698   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:05.073705   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:05.073767   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:05.114321   57719 cri.go:89] found id: ""
	I0410 22:51:05.114348   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.114356   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:05.114361   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:05.114430   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:05.153119   57719 cri.go:89] found id: ""
	I0410 22:51:05.153156   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.153164   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:05.153170   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:05.153230   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:05.193393   57719 cri.go:89] found id: ""
	I0410 22:51:05.193420   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.193428   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:05.193433   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:05.193479   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:05.229826   57719 cri.go:89] found id: ""
	I0410 22:51:05.229853   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.229861   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:05.229867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:05.229915   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:05.265511   57719 cri.go:89] found id: ""
	I0410 22:51:05.265544   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.265555   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:05.265562   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:05.265627   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:05.302257   57719 cri.go:89] found id: ""
	I0410 22:51:05.302287   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.302297   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:05.302305   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:05.302386   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:05.347344   57719 cri.go:89] found id: ""
	I0410 22:51:05.347372   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.347380   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:05.347388   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:05.347399   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:05.421796   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:05.421817   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:05.421829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.521499   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.017660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.017945   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:02.651104   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.150286   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.150565   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:06.402266   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:08.406456   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.501803   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:05.501839   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.549161   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:05.549195   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:05.599598   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:05.599633   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.115679   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:08.130273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:08.130350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:08.172302   57719 cri.go:89] found id: ""
	I0410 22:51:08.172328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.172335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:08.172342   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:08.172390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:08.220789   57719 cri.go:89] found id: ""
	I0410 22:51:08.220812   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.220819   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:08.220825   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:08.220874   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:08.258299   57719 cri.go:89] found id: ""
	I0410 22:51:08.258328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.258341   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:08.258349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:08.258404   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:08.297698   57719 cri.go:89] found id: ""
	I0410 22:51:08.297726   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.297733   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:08.297739   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:08.297787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:08.335564   57719 cri.go:89] found id: ""
	I0410 22:51:08.335595   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.335605   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:08.335613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:08.335671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:08.373340   57719 cri.go:89] found id: ""
	I0410 22:51:08.373367   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.373377   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:08.373384   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:08.373481   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:08.413961   57719 cri.go:89] found id: ""
	I0410 22:51:08.413984   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.413993   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:08.414001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:08.414062   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:08.459449   57719 cri.go:89] found id: ""
	I0410 22:51:08.459481   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.459492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:08.459505   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:08.459521   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:08.518061   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:08.518103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.533653   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:08.533680   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:08.619882   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:08.619917   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:08.619932   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:08.696329   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:08.696364   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:09.518298   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.518877   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:09.650387   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.650614   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:10.902634   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:13.402009   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.256846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:11.271521   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:11.271582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:11.312829   57719 cri.go:89] found id: ""
	I0410 22:51:11.312851   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.312869   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:11.312876   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:11.312930   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:11.355183   57719 cri.go:89] found id: ""
	I0410 22:51:11.355210   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.355220   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:11.355227   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:11.355287   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:11.394345   57719 cri.go:89] found id: ""
	I0410 22:51:11.394376   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.394388   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:11.394396   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:11.394460   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:11.434128   57719 cri.go:89] found id: ""
	I0410 22:51:11.434155   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.434163   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:11.434169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:11.434219   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:11.473160   57719 cri.go:89] found id: ""
	I0410 22:51:11.473189   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.473201   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:11.473208   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:11.473278   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:11.513782   57719 cri.go:89] found id: ""
	I0410 22:51:11.513815   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.513826   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:11.513835   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:11.513891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:11.556057   57719 cri.go:89] found id: ""
	I0410 22:51:11.556085   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.556093   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:11.556100   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:11.556147   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:11.594557   57719 cri.go:89] found id: ""
	I0410 22:51:11.594579   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.594586   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:11.594594   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:11.594609   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:11.672795   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:11.672841   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:11.716011   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:11.716046   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:11.769372   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:11.769413   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:11.784589   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:11.784617   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:11.857051   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.358019   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:14.372116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:14.372192   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:14.412020   57719 cri.go:89] found id: ""
	I0410 22:51:14.412049   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.412061   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:14.412068   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:14.412128   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:14.450317   57719 cri.go:89] found id: ""
	I0410 22:51:14.450349   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.450360   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:14.450368   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:14.450426   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:14.509080   57719 cri.go:89] found id: ""
	I0410 22:51:14.509104   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.509110   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:14.509116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:14.509185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:14.561540   57719 cri.go:89] found id: ""
	I0410 22:51:14.561572   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.561583   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:14.561590   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:14.561670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:14.622498   57719 cri.go:89] found id: ""
	I0410 22:51:14.622528   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.622538   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:14.622546   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:14.622606   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:14.678451   57719 cri.go:89] found id: ""
	I0410 22:51:14.678481   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.678490   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:14.678498   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:14.678560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:14.720264   57719 cri.go:89] found id: ""
	I0410 22:51:14.720302   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.720315   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:14.720323   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:14.720388   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:14.758039   57719 cri.go:89] found id: ""
	I0410 22:51:14.758063   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.758071   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:14.758079   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:14.758090   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:14.808111   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:14.808171   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:14.825444   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:14.825487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:14.906859   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.906884   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:14.906899   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:14.995176   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:14.995225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:14.017397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.017624   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:14.149898   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.150320   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:15.901542   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.902391   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.541159   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:17.556679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:17.556749   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:17.595839   57719 cri.go:89] found id: ""
	I0410 22:51:17.595869   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.595880   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:17.595895   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:17.595954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:17.633921   57719 cri.go:89] found id: ""
	I0410 22:51:17.633947   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.633957   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:17.633964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:17.634033   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:17.673467   57719 cri.go:89] found id: ""
	I0410 22:51:17.673493   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.673501   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:17.673507   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:17.673554   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:17.709631   57719 cri.go:89] found id: ""
	I0410 22:51:17.709660   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.709670   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:17.709679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:17.709739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:17.760852   57719 cri.go:89] found id: ""
	I0410 22:51:17.760880   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.760893   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:17.760908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:17.760969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:17.798074   57719 cri.go:89] found id: ""
	I0410 22:51:17.798099   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.798108   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:17.798117   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:17.798178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:17.835807   57719 cri.go:89] found id: ""
	I0410 22:51:17.835839   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.835854   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:17.835863   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:17.835935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:17.876812   57719 cri.go:89] found id: ""
	I0410 22:51:17.876846   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.876856   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:17.876868   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:17.876882   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:17.891121   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:17.891149   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:17.966241   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:17.966264   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:17.966277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:18.042633   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:18.042667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:18.088294   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:18.088327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:18.518103   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.519397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:18.650784   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:21.150770   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.403127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:22.901329   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.647016   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:20.662573   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:20.662640   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:20.701147   57719 cri.go:89] found id: ""
	I0410 22:51:20.701173   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.701184   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:20.701191   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:20.701252   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:20.739005   57719 cri.go:89] found id: ""
	I0410 22:51:20.739038   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.739049   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:20.739057   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:20.739112   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:20.776335   57719 cri.go:89] found id: ""
	I0410 22:51:20.776365   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.776379   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:20.776386   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:20.776471   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:20.814755   57719 cri.go:89] found id: ""
	I0410 22:51:20.814789   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.814800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:20.814808   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:20.814867   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:20.853872   57719 cri.go:89] found id: ""
	I0410 22:51:20.853897   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.853904   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:20.853910   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:20.853958   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:20.891616   57719 cri.go:89] found id: ""
	I0410 22:51:20.891648   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.891656   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:20.891662   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:20.891710   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:20.930285   57719 cri.go:89] found id: ""
	I0410 22:51:20.930316   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.930326   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:20.930341   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:20.930398   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:20.967857   57719 cri.go:89] found id: ""
	I0410 22:51:20.967894   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.967904   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:20.967913   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:20.967934   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:21.053166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:21.053201   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:21.098860   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:21.098888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:21.150395   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:21.150430   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:21.164707   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:21.164737   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:21.251010   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:23.751441   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:23.769949   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:23.770014   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:23.809652   57719 cri.go:89] found id: ""
	I0410 22:51:23.809678   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.809686   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:23.809692   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:23.809740   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:23.847331   57719 cri.go:89] found id: ""
	I0410 22:51:23.847364   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.847374   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:23.847383   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:23.847445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:23.889459   57719 cri.go:89] found id: ""
	I0410 22:51:23.889488   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.889498   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:23.889505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:23.889564   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:23.932683   57719 cri.go:89] found id: ""
	I0410 22:51:23.932712   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.932720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:23.932727   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:23.932787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:23.974161   57719 cri.go:89] found id: ""
	I0410 22:51:23.974187   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.974194   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:23.974200   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:23.974253   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:24.013058   57719 cri.go:89] found id: ""
	I0410 22:51:24.013087   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.013098   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:24.013106   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:24.013169   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:24.052556   57719 cri.go:89] found id: ""
	I0410 22:51:24.052582   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.052590   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:24.052596   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:24.052643   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:24.089940   57719 cri.go:89] found id: ""
	I0410 22:51:24.089967   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.089974   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:24.089982   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:24.089992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:24.133198   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:24.133226   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:24.186615   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:24.186651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:24.200559   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:24.200586   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:24.277061   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:24.277093   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:24.277109   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:23.016887   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:25.018325   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:27.018514   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:23.650669   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.149198   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:24.901704   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.902227   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.902337   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.855354   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:26.870269   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:26.870329   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:26.910056   57719 cri.go:89] found id: ""
	I0410 22:51:26.910084   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.910094   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:26.910101   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:26.910163   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:26.949646   57719 cri.go:89] found id: ""
	I0410 22:51:26.949674   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.949684   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:26.949690   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:26.949759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:26.990945   57719 cri.go:89] found id: ""
	I0410 22:51:26.990970   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.990977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:26.990984   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:26.991053   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:27.029464   57719 cri.go:89] found id: ""
	I0410 22:51:27.029491   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.029500   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:27.029505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:27.029562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:27.072194   57719 cri.go:89] found id: ""
	I0410 22:51:27.072235   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.072260   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:27.072270   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:27.072339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:27.106942   57719 cri.go:89] found id: ""
	I0410 22:51:27.106969   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.106979   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:27.106985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:27.107045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:27.144851   57719 cri.go:89] found id: ""
	I0410 22:51:27.144885   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.144894   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:27.144909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:27.144970   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:27.188138   57719 cri.go:89] found id: ""
	I0410 22:51:27.188166   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.188178   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:27.188189   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:27.188204   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:27.241911   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:27.241943   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:27.255296   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:27.255322   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:27.327638   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:27.327663   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:27.327678   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:27.409048   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:27.409083   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:29.960093   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:29.975583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:29.975647   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:30.018120   57719 cri.go:89] found id: ""
	I0410 22:51:30.018149   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.018159   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:30.018166   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:30.018225   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:30.055487   57719 cri.go:89] found id: ""
	I0410 22:51:30.055511   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.055518   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:30.055524   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:30.055573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:30.093723   57719 cri.go:89] found id: ""
	I0410 22:51:30.093749   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.093756   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:30.093761   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:30.093808   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:30.138278   57719 cri.go:89] found id: ""
	I0410 22:51:30.138306   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.138317   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:30.138324   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:30.138385   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:30.174454   57719 cri.go:89] found id: ""
	I0410 22:51:30.174484   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.174495   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:30.174502   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:30.174573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:30.213189   57719 cri.go:89] found id: ""
	I0410 22:51:30.213214   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.213221   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:30.213227   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:30.213272   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:30.253264   57719 cri.go:89] found id: ""
	I0410 22:51:30.253294   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.253304   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:30.253309   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:30.253357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:30.289729   57719 cri.go:89] found id: ""
	I0410 22:51:30.289755   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.289767   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:30.289777   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:30.289793   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:30.303387   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:30.303416   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:30.381294   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:30.381315   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:30.381331   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:29.019226   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:31.519681   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.150621   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.649807   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.903662   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:33.401827   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.468072   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:30.468110   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:30.508761   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:30.508794   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.061654   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:33.077072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:33.077146   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:33.113753   57719 cri.go:89] found id: ""
	I0410 22:51:33.113781   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.113791   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:33.113798   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:33.113848   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:33.149212   57719 cri.go:89] found id: ""
	I0410 22:51:33.149238   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.149249   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:33.149256   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:33.149321   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:33.185619   57719 cri.go:89] found id: ""
	I0410 22:51:33.185649   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.185659   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:33.185667   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:33.185725   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:33.222270   57719 cri.go:89] found id: ""
	I0410 22:51:33.222301   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.222313   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:33.222320   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:33.222375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:33.258594   57719 cri.go:89] found id: ""
	I0410 22:51:33.258624   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.258636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:33.258642   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:33.258689   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:33.298326   57719 cri.go:89] found id: ""
	I0410 22:51:33.298360   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.298368   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:33.298374   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:33.298438   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:33.337407   57719 cri.go:89] found id: ""
	I0410 22:51:33.337438   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.337449   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:33.337456   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:33.337520   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:33.374971   57719 cri.go:89] found id: ""
	I0410 22:51:33.375003   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.375014   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:33.375024   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:33.375039   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:33.415256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:33.415288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.467895   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:33.467929   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:33.484604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:33.484639   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:33.562267   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:33.562288   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:33.562299   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:34.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.519093   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:32.650396   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.150200   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.902810   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:38.401463   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.142628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:36.157825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:36.157883   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:36.199418   57719 cri.go:89] found id: ""
	I0410 22:51:36.199446   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.199456   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:36.199463   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:36.199523   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:36.238136   57719 cri.go:89] found id: ""
	I0410 22:51:36.238166   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.238174   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:36.238180   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:36.238229   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:36.273995   57719 cri.go:89] found id: ""
	I0410 22:51:36.274026   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.274037   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:36.274049   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:36.274110   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:36.311007   57719 cri.go:89] found id: ""
	I0410 22:51:36.311039   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.311049   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:36.311057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:36.311122   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:36.351062   57719 cri.go:89] found id: ""
	I0410 22:51:36.351086   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.351093   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:36.351099   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:36.351152   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:36.388660   57719 cri.go:89] found id: ""
	I0410 22:51:36.388689   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.388703   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:36.388711   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:36.388762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:36.428715   57719 cri.go:89] found id: ""
	I0410 22:51:36.428753   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.428761   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:36.428767   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:36.428831   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:36.467186   57719 cri.go:89] found id: ""
	I0410 22:51:36.467213   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.467220   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:36.467228   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:36.467239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:36.521831   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:36.521860   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:36.536929   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:36.536957   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:36.614624   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:36.614647   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:36.614659   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:36.694604   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:36.694646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.240039   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:39.255177   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:39.255262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:39.293063   57719 cri.go:89] found id: ""
	I0410 22:51:39.293091   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.293113   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:39.293120   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:39.293181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:39.331603   57719 cri.go:89] found id: ""
	I0410 22:51:39.331631   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.331639   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:39.331645   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:39.331697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:39.372881   57719 cri.go:89] found id: ""
	I0410 22:51:39.372908   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.372919   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:39.372926   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:39.372987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:39.417399   57719 cri.go:89] found id: ""
	I0410 22:51:39.417425   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.417435   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:39.417442   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:39.417503   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:39.458836   57719 cri.go:89] found id: ""
	I0410 22:51:39.458868   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.458877   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:39.458882   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:39.458932   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:39.496436   57719 cri.go:89] found id: ""
	I0410 22:51:39.496460   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.496467   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:39.496474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:39.496532   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:39.534649   57719 cri.go:89] found id: ""
	I0410 22:51:39.534681   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.534690   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:39.534695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:39.534754   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:39.571677   57719 cri.go:89] found id: ""
	I0410 22:51:39.571698   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.571705   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:39.571714   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:39.571725   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.621445   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:39.621482   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:39.676341   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:39.676382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:39.691543   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:39.691573   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:39.769452   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:39.769477   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:39.769493   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:39.017483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:41.020027   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:37.651534   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.151404   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.401635   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.401931   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.401972   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.350823   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:42.367124   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:42.367199   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:42.407511   57719 cri.go:89] found id: ""
	I0410 22:51:42.407545   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.407554   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:42.407560   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:42.407622   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:42.442913   57719 cri.go:89] found id: ""
	I0410 22:51:42.442948   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.442958   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:42.442964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:42.443027   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:42.480747   57719 cri.go:89] found id: ""
	I0410 22:51:42.480777   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.480786   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:42.480792   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:42.480846   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:42.521610   57719 cri.go:89] found id: ""
	I0410 22:51:42.521635   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.521644   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:42.521651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:42.521698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:42.561076   57719 cri.go:89] found id: ""
	I0410 22:51:42.561108   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.561119   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:42.561127   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:42.561189   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:42.598034   57719 cri.go:89] found id: ""
	I0410 22:51:42.598059   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.598066   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:42.598072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:42.598129   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:42.637051   57719 cri.go:89] found id: ""
	I0410 22:51:42.637085   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.637095   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:42.637103   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:42.637162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:42.676051   57719 cri.go:89] found id: ""
	I0410 22:51:42.676084   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.676094   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:42.676105   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:42.676120   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:42.719607   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:42.719634   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:42.770791   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:42.770829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:42.785704   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:42.785730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:42.876445   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:42.876475   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:42.876490   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:43.518453   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.019450   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.650486   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.650894   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:47.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.901358   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:48.902417   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:45.458721   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:45.474125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:45.474203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:45.511105   57719 cri.go:89] found id: ""
	I0410 22:51:45.511143   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.511153   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:45.511161   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:45.511220   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:45.552891   57719 cri.go:89] found id: ""
	I0410 22:51:45.552916   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.552924   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:45.552930   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:45.552986   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:45.592423   57719 cri.go:89] found id: ""
	I0410 22:51:45.592458   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.592474   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:45.592481   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:45.592542   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:45.630964   57719 cri.go:89] found id: ""
	I0410 22:51:45.631009   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.631026   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:45.631033   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:45.631098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:45.669557   57719 cri.go:89] found id: ""
	I0410 22:51:45.669586   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.669595   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:45.669602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:45.669702   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:45.706359   57719 cri.go:89] found id: ""
	I0410 22:51:45.706387   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.706395   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:45.706402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:45.706463   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:45.743301   57719 cri.go:89] found id: ""
	I0410 22:51:45.743330   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.743337   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:45.743343   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:45.743390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:45.781679   57719 cri.go:89] found id: ""
	I0410 22:51:45.781703   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.781711   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:45.781718   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:45.781730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:45.835251   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:45.835286   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:45.849255   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:45.849284   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:45.918404   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:45.918436   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:45.918452   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:45.999556   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:45.999591   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.546421   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:48.561243   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:48.561314   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:48.618335   57719 cri.go:89] found id: ""
	I0410 22:51:48.618361   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.618369   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:48.618375   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:48.618445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:48.656116   57719 cri.go:89] found id: ""
	I0410 22:51:48.656151   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.656160   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:48.656167   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:48.656222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:48.694846   57719 cri.go:89] found id: ""
	I0410 22:51:48.694874   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.694884   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:48.694897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:48.694971   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:48.731988   57719 cri.go:89] found id: ""
	I0410 22:51:48.732020   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.732031   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:48.732039   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:48.732102   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:48.768595   57719 cri.go:89] found id: ""
	I0410 22:51:48.768627   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.768636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:48.768643   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:48.768708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:48.807263   57719 cri.go:89] found id: ""
	I0410 22:51:48.807292   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.807302   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:48.807308   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:48.807366   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:48.845291   57719 cri.go:89] found id: ""
	I0410 22:51:48.845317   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.845325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:48.845329   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:48.845399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:48.891056   57719 cri.go:89] found id: ""
	I0410 22:51:48.891081   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.891091   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:48.891102   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:48.891117   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.931963   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:48.931992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:48.985539   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:48.985579   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:49.000685   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:49.000716   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:49.076097   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:49.076127   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:49.076143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:48.517879   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.018479   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:49.150511   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.650519   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.400971   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:53.401596   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.663336   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:51.678249   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:51.678315   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:51.720062   57719 cri.go:89] found id: ""
	I0410 22:51:51.720088   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.720096   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:51.720103   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:51.720164   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:51.766351   57719 cri.go:89] found id: ""
	I0410 22:51:51.766387   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.766395   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:51.766401   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:51.766448   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:51.813037   57719 cri.go:89] found id: ""
	I0410 22:51:51.813068   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.813080   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:51.813087   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:51.813150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:51.849232   57719 cri.go:89] found id: ""
	I0410 22:51:51.849262   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.849273   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:51.849280   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:51.849346   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:51.886392   57719 cri.go:89] found id: ""
	I0410 22:51:51.886415   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.886422   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:51.886428   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:51.886485   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:51.930859   57719 cri.go:89] found id: ""
	I0410 22:51:51.930896   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.930905   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:51.930913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:51.930978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:51.970403   57719 cri.go:89] found id: ""
	I0410 22:51:51.970501   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.970524   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:51.970533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:51.970599   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:52.008281   57719 cri.go:89] found id: ""
	I0410 22:51:52.008311   57719 logs.go:276] 0 containers: []
	W0410 22:51:52.008322   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:52.008333   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:52.008347   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:52.060623   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:52.060656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:52.075529   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:52.075559   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:52.158330   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:52.158356   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:52.158371   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:52.236356   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:52.236392   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:54.782448   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:54.796928   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:54.796997   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:54.836297   57719 cri.go:89] found id: ""
	I0410 22:51:54.836326   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.836335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:54.836341   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:54.836390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:54.873501   57719 cri.go:89] found id: ""
	I0410 22:51:54.873532   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.873540   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:54.873547   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:54.873617   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:54.914200   57719 cri.go:89] found id: ""
	I0410 22:51:54.914227   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.914238   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:54.914247   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:54.914308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:54.958654   57719 cri.go:89] found id: ""
	I0410 22:51:54.958682   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.958693   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:54.958702   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:54.958761   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:55.017032   57719 cri.go:89] found id: ""
	I0410 22:51:55.017078   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.017090   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:55.017101   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:55.017167   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:55.093024   57719 cri.go:89] found id: ""
	I0410 22:51:55.093059   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.093070   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:55.093085   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:55.093156   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:55.142412   57719 cri.go:89] found id: ""
	I0410 22:51:55.142441   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.142456   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:55.142464   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:55.142521   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:55.180116   57719 cri.go:89] found id: ""
	I0410 22:51:55.180147   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.180159   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:55.180169   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:55.180186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:55.249118   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:55.249139   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:55.249153   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:55.327558   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:55.327597   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:55.373127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:55.373163   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:53.518589   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.017080   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:54.151372   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.650238   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.401716   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:57.902174   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.431602   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:55.431647   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:57.947559   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:57.962916   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:57.962983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:58.000955   57719 cri.go:89] found id: ""
	I0410 22:51:58.000983   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.000990   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:58.000997   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:58.001049   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:58.040556   57719 cri.go:89] found id: ""
	I0410 22:51:58.040579   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.040586   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:58.040592   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:58.040649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:58.079121   57719 cri.go:89] found id: ""
	I0410 22:51:58.079148   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.079155   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:58.079161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:58.079240   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:58.119876   57719 cri.go:89] found id: ""
	I0410 22:51:58.119902   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.119914   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:58.119929   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:58.119987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:58.160130   57719 cri.go:89] found id: ""
	I0410 22:51:58.160162   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.160173   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:58.160181   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:58.160258   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:58.198162   57719 cri.go:89] found id: ""
	I0410 22:51:58.198195   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.198207   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:58.198215   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:58.198266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:58.235049   57719 cri.go:89] found id: ""
	I0410 22:51:58.235078   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.235089   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:58.235096   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:58.235157   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:58.275786   57719 cri.go:89] found id: ""
	I0410 22:51:58.275825   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.275845   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:58.275856   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:58.275872   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:58.316246   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:58.316277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:58.371614   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:58.371649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:58.386610   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:58.386646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:58.465167   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:58.465187   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:58.465199   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:58.018362   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.517710   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:59.152119   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.650566   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.401148   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:02.401494   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.401624   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.049405   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:01.073251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:01.073328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:01.125169   57719 cri.go:89] found id: ""
	I0410 22:52:01.125201   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.125212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:01.125220   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:01.125289   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:01.171256   57719 cri.go:89] found id: ""
	I0410 22:52:01.171289   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.171300   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:01.171308   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:01.171376   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:01.210444   57719 cri.go:89] found id: ""
	I0410 22:52:01.210478   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.210489   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:01.210503   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:01.210568   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:01.252448   57719 cri.go:89] found id: ""
	I0410 22:52:01.252473   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.252480   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:01.252486   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:01.252531   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:01.293084   57719 cri.go:89] found id: ""
	I0410 22:52:01.293117   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.293128   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:01.293136   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:01.293208   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:01.330992   57719 cri.go:89] found id: ""
	I0410 22:52:01.331019   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.331026   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:01.331032   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:01.331081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:01.369286   57719 cri.go:89] found id: ""
	I0410 22:52:01.369315   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.369325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:01.369331   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:01.369378   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:01.409888   57719 cri.go:89] found id: ""
	I0410 22:52:01.409916   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.409924   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:01.409933   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:01.409944   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:01.484535   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:01.484557   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:01.484569   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:01.565727   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:01.565778   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:01.606987   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:01.607018   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:01.659492   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:01.659529   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.174971   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:04.190302   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:04.190382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:04.230050   57719 cri.go:89] found id: ""
	I0410 22:52:04.230080   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.230090   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:04.230097   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:04.230162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:04.269870   57719 cri.go:89] found id: ""
	I0410 22:52:04.269902   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.269908   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:04.269914   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:04.269969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:04.310977   57719 cri.go:89] found id: ""
	I0410 22:52:04.311008   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.311019   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:04.311026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:04.311096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:04.349108   57719 cri.go:89] found id: ""
	I0410 22:52:04.349136   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.349147   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:04.349154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:04.349216   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:04.389590   57719 cri.go:89] found id: ""
	I0410 22:52:04.389613   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.389625   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:04.389633   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:04.389697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:04.432962   57719 cri.go:89] found id: ""
	I0410 22:52:04.432989   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.433001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:04.433008   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:04.433070   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:04.473912   57719 cri.go:89] found id: ""
	I0410 22:52:04.473946   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.473955   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:04.473960   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:04.474029   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:04.516157   57719 cri.go:89] found id: ""
	I0410 22:52:04.516182   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.516192   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:04.516203   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:04.516218   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:04.569047   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:04.569082   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:04.622639   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:04.622673   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.638441   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:04.638470   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:04.718203   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:04.718227   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:04.718241   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:02.518104   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.519509   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.519648   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.150041   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.150157   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.902111   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.902816   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:07.302147   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:07.315919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:07.315984   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:07.354692   57719 cri.go:89] found id: ""
	I0410 22:52:07.354723   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.354733   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:07.354740   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:07.354803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:07.393418   57719 cri.go:89] found id: ""
	I0410 22:52:07.393447   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.393459   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:07.393466   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:07.393525   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:07.436810   57719 cri.go:89] found id: ""
	I0410 22:52:07.436837   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.436847   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:07.436855   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:07.436920   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:07.478685   57719 cri.go:89] found id: ""
	I0410 22:52:07.478709   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.478720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:07.478735   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:07.478792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:07.515699   57719 cri.go:89] found id: ""
	I0410 22:52:07.515727   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.515737   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:07.515744   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:07.515805   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:07.556419   57719 cri.go:89] found id: ""
	I0410 22:52:07.556443   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.556451   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:07.556457   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:07.556560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:07.598076   57719 cri.go:89] found id: ""
	I0410 22:52:07.598106   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.598113   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:07.598119   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:07.598183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:07.637778   57719 cri.go:89] found id: ""
	I0410 22:52:07.637814   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.637826   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:07.637839   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:07.637854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:07.693688   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:07.693728   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:07.709256   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:07.709289   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:07.778519   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:07.778544   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:07.778584   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:07.858937   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:07.858973   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.405765   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:10.422019   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:10.422083   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:09.017771   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.017883   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.151568   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.650989   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.402181   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.902520   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.463779   57719 cri.go:89] found id: ""
	I0410 22:52:10.463818   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.463829   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:10.463836   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:10.463923   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:10.503680   57719 cri.go:89] found id: ""
	I0410 22:52:10.503710   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.503718   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:10.503736   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:10.503804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:10.545567   57719 cri.go:89] found id: ""
	I0410 22:52:10.545594   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.545605   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:10.545613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:10.545671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:10.590864   57719 cri.go:89] found id: ""
	I0410 22:52:10.590892   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.590901   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:10.590908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:10.590968   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:10.634628   57719 cri.go:89] found id: ""
	I0410 22:52:10.634659   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.634670   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:10.634677   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:10.634758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:10.681477   57719 cri.go:89] found id: ""
	I0410 22:52:10.681507   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.681526   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:10.681533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:10.681585   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:10.725203   57719 cri.go:89] found id: ""
	I0410 22:52:10.725229   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.725328   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:10.725368   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:10.725443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:10.764994   57719 cri.go:89] found id: ""
	I0410 22:52:10.765028   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.765036   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:10.765044   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:10.765094   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.808981   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:10.809012   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:10.866429   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:10.866468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:10.882512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:10.882537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:10.963016   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:10.963041   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:10.963053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:13.544552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:13.558161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:13.558238   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:13.596945   57719 cri.go:89] found id: ""
	I0410 22:52:13.596977   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.596988   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:13.596996   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:13.597057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:13.637920   57719 cri.go:89] found id: ""
	I0410 22:52:13.637944   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.637951   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:13.637958   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:13.638012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:13.676777   57719 cri.go:89] found id: ""
	I0410 22:52:13.676808   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.676819   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:13.676826   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:13.676887   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:13.714054   57719 cri.go:89] found id: ""
	I0410 22:52:13.714078   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.714086   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:13.714091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:13.714142   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:13.757162   57719 cri.go:89] found id: ""
	I0410 22:52:13.757194   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.757206   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:13.757214   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:13.757276   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:13.793578   57719 cri.go:89] found id: ""
	I0410 22:52:13.793616   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.793629   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:13.793636   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:13.793697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:13.831307   57719 cri.go:89] found id: ""
	I0410 22:52:13.831336   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.831346   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:13.831353   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:13.831400   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:13.872072   57719 cri.go:89] found id: ""
	I0410 22:52:13.872109   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.872117   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:13.872127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:13.872143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:13.926909   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:13.926947   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:13.943095   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:13.943126   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:14.015301   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:14.015336   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:14.015351   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:14.101100   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:14.101137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:13.019599   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.517932   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.150248   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.650269   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.401396   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.402384   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.650213   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:16.664603   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:16.664677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:16.701498   57719 cri.go:89] found id: ""
	I0410 22:52:16.701527   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.701539   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:16.701547   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:16.701618   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:16.740687   57719 cri.go:89] found id: ""
	I0410 22:52:16.740716   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.740725   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:16.740730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:16.740789   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:16.777349   57719 cri.go:89] found id: ""
	I0410 22:52:16.777372   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.777380   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:16.777385   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:16.777454   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:16.819855   57719 cri.go:89] found id: ""
	I0410 22:52:16.819890   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.819900   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:16.819909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:16.819973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:16.859939   57719 cri.go:89] found id: ""
	I0410 22:52:16.859970   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.859981   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:16.859991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:16.860056   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:16.897861   57719 cri.go:89] found id: ""
	I0410 22:52:16.897886   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.897893   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:16.897899   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:16.897962   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:16.935642   57719 cri.go:89] found id: ""
	I0410 22:52:16.935673   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.935681   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:16.935687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:16.935733   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:16.974268   57719 cri.go:89] found id: ""
	I0410 22:52:16.974294   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.974302   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:16.974311   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:16.974327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:17.027850   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:17.027888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:17.043343   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:17.043379   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:17.120945   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:17.120967   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:17.120979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.204831   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:17.204868   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:19.749712   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:19.764102   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:19.764181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:19.800759   57719 cri.go:89] found id: ""
	I0410 22:52:19.800787   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.800795   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:19.800801   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:19.800851   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:19.839678   57719 cri.go:89] found id: ""
	I0410 22:52:19.839711   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.839723   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:19.839730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:19.839791   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:19.876983   57719 cri.go:89] found id: ""
	I0410 22:52:19.877007   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.877015   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:19.877020   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:19.877081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:19.918139   57719 cri.go:89] found id: ""
	I0410 22:52:19.918167   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.918177   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:19.918186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:19.918243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:19.954770   57719 cri.go:89] found id: ""
	I0410 22:52:19.954808   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.954818   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:19.954825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:19.954881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:19.993643   57719 cri.go:89] found id: ""
	I0410 22:52:19.993670   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.993680   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:19.993687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:19.993746   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:20.030466   57719 cri.go:89] found id: ""
	I0410 22:52:20.030494   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.030503   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:20.030510   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:20.030575   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:20.069264   57719 cri.go:89] found id: ""
	I0410 22:52:20.069291   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.069299   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:20.069307   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:20.069318   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:20.117354   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:20.117382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:20.170758   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:20.170800   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:20.187014   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:20.187055   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:20.269620   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:20.269645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:20.269661   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.518440   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.018602   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.151102   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.151664   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.901836   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:23.401655   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.844841   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:22.861923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:22.861983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:22.907972   57719 cri.go:89] found id: ""
	I0410 22:52:22.908000   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.908010   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:22.908017   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:22.908081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:22.949822   57719 cri.go:89] found id: ""
	I0410 22:52:22.949851   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.949861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:22.949869   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:22.949935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:22.989872   57719 cri.go:89] found id: ""
	I0410 22:52:22.989895   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.989902   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:22.989908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:22.989959   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:23.031881   57719 cri.go:89] found id: ""
	I0410 22:52:23.031900   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.031908   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:23.031913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:23.031978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:23.071691   57719 cri.go:89] found id: ""
	I0410 22:52:23.071719   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.071726   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:23.071732   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:23.071792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:23.109961   57719 cri.go:89] found id: ""
	I0410 22:52:23.109990   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.110001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:23.110009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:23.110069   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:23.152955   57719 cri.go:89] found id: ""
	I0410 22:52:23.152979   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.152986   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:23.152991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:23.153054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:23.191883   57719 cri.go:89] found id: ""
	I0410 22:52:23.191924   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.191935   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:23.191947   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:23.191959   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:23.232692   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:23.232731   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:23.283648   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:23.283684   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:23.297701   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:23.297729   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:23.381657   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:23.381673   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:23.381685   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:22.520899   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.016955   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.018541   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.650053   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.402084   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.402670   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.961531   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:25.977539   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:25.977639   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:26.021844   57719 cri.go:89] found id: ""
	I0410 22:52:26.021875   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.021886   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:26.021893   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:26.021954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:26.064286   57719 cri.go:89] found id: ""
	I0410 22:52:26.064316   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.064327   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:26.064335   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:26.064394   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:26.104381   57719 cri.go:89] found id: ""
	I0410 22:52:26.104426   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.104437   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:26.104445   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:26.104522   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:26.143382   57719 cri.go:89] found id: ""
	I0410 22:52:26.143407   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.143417   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:26.143424   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:26.143489   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:26.179609   57719 cri.go:89] found id: ""
	I0410 22:52:26.179635   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.179646   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:26.179652   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:26.179714   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:26.217660   57719 cri.go:89] found id: ""
	I0410 22:52:26.217689   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.217695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:26.217701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:26.217758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:26.254914   57719 cri.go:89] found id: ""
	I0410 22:52:26.254946   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.254956   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:26.254963   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:26.255047   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:26.293738   57719 cri.go:89] found id: ""
	I0410 22:52:26.293769   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.293779   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:26.293790   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:26.293809   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:26.366700   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:26.366725   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:26.366741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:26.445143   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:26.445183   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:26.493175   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:26.493203   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:26.554952   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:26.554992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.072225   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:29.087075   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:29.087150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:29.131314   57719 cri.go:89] found id: ""
	I0410 22:52:29.131345   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.131357   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:29.131365   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:29.131427   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:29.169263   57719 cri.go:89] found id: ""
	I0410 22:52:29.169289   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.169298   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:29.169304   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:29.169357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:29.209535   57719 cri.go:89] found id: ""
	I0410 22:52:29.209559   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.209570   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:29.209575   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:29.209630   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:29.251172   57719 cri.go:89] found id: ""
	I0410 22:52:29.251225   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.251233   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:29.251238   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:29.251290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:29.296142   57719 cri.go:89] found id: ""
	I0410 22:52:29.296169   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.296179   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:29.296185   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:29.296245   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:29.336910   57719 cri.go:89] found id: ""
	I0410 22:52:29.336933   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.336940   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:29.336946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:29.337003   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:29.396332   57719 cri.go:89] found id: ""
	I0410 22:52:29.396371   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.396382   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:29.396390   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:29.396475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:29.438301   57719 cri.go:89] found id: ""
	I0410 22:52:29.438332   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.438340   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:29.438348   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:29.438360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:29.482687   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:29.482711   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:29.535115   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:29.535146   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.551736   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:29.551760   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:29.624162   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:29.624198   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:29.624213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:29.517873   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.519737   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.650947   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.651296   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.150101   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.901370   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.902050   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.401849   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.204355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:32.218239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:32.218310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:32.255412   57719 cri.go:89] found id: ""
	I0410 22:52:32.255440   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.255451   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:32.255458   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:32.255516   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:32.293553   57719 cri.go:89] found id: ""
	I0410 22:52:32.293580   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.293591   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:32.293604   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:32.293663   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:32.332814   57719 cri.go:89] found id: ""
	I0410 22:52:32.332846   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.332855   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:32.332862   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:32.332924   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:32.371312   57719 cri.go:89] found id: ""
	I0410 22:52:32.371347   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.371368   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:32.371376   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:32.371441   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:32.407630   57719 cri.go:89] found id: ""
	I0410 22:52:32.407652   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.407659   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:32.407664   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:32.407720   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:32.444878   57719 cri.go:89] found id: ""
	I0410 22:52:32.444904   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.444914   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:32.444923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:32.444989   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:32.490540   57719 cri.go:89] found id: ""
	I0410 22:52:32.490567   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.490578   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:32.490586   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:32.490644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:32.528911   57719 cri.go:89] found id: ""
	I0410 22:52:32.528953   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.528961   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:32.528969   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:32.528979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:32.608601   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:32.608626   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:32.608641   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:32.684840   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:32.684876   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:32.728092   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:32.728132   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:32.778491   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:32.778524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.296228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:35.310615   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:35.310705   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:35.377585   57719 cri.go:89] found id: ""
	I0410 22:52:35.377612   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.377623   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:35.377632   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:35.377692   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:35.417734   57719 cri.go:89] found id: ""
	I0410 22:52:35.417775   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.417796   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:35.417803   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:35.417864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:34.017119   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.017526   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.150859   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.151112   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.402036   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.402201   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:35.456256   57719 cri.go:89] found id: ""
	I0410 22:52:35.456281   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.456291   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:35.456298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:35.456382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:35.495233   57719 cri.go:89] found id: ""
	I0410 22:52:35.495257   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.495267   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:35.495274   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:35.495333   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:35.535239   57719 cri.go:89] found id: ""
	I0410 22:52:35.535273   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.535284   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:35.535292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:35.535352   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:35.571601   57719 cri.go:89] found id: ""
	I0410 22:52:35.571628   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.571638   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:35.571645   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:35.571708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:35.612008   57719 cri.go:89] found id: ""
	I0410 22:52:35.612036   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.612045   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:35.612051   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:35.612099   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:35.649029   57719 cri.go:89] found id: ""
	I0410 22:52:35.649057   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.649065   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:35.649073   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:35.649084   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:35.702630   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:35.702668   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.718404   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:35.718433   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:35.798380   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:35.798405   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:35.798420   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:35.874049   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:35.874085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:38.416265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:38.430921   57719 kubeadm.go:591] duration metric: took 4m3.090666464s to restartPrimaryControlPlane
	W0410 22:52:38.431006   57719 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:52:38.431030   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:52:41.138973   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.707913754s)
	I0410 22:52:41.139063   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:52:41.155646   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:52:41.166345   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:52:41.176443   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:52:41.176481   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:52:41.176547   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:52:41.186887   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:52:41.186960   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:52:41.199740   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:52:41.209843   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:52:41.209901   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:52:41.219804   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.229739   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:52:41.229807   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.240127   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:52:41.249763   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:52:41.249824   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:52:41.260148   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:52:41.334127   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:52:41.334200   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:52:41.506104   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:52:41.506307   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:52:41.506488   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:52:41.715227   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:52:38.519180   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.018674   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.649983   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.152610   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.717460   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:52:41.717564   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:52:41.717654   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:52:41.717781   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:52:41.717898   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:52:41.718004   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:52:41.718099   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:52:41.718203   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:52:41.718550   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:52:41.719083   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:52:41.719413   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:52:41.719571   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:52:41.719675   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:52:41.998202   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:52:42.109508   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:52:42.315545   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:52:42.448910   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:52:42.465903   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:52:42.467312   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:52:42.467387   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:52:42.636790   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:52:40.402237   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.404435   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.638969   57719 out.go:204]   - Booting up control plane ...
	I0410 22:52:42.639106   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:52:42.652152   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:52:42.653843   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:52:42.654719   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:52:42.658006   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:52:43.518416   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.017894   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:43.650778   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.149976   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:44.902059   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.902549   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:49.401695   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.517833   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.150825   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:50.151391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.901096   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.902619   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.518616   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.519254   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:52.649783   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:54.651766   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:56.655687   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.903916   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.400789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.517303   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:59.152346   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:01.651146   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.901531   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.400690   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:02.517569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:04.517775   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.017655   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.651728   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.652505   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.901605   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.902363   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:09.018576   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:11.510820   58186 pod_ready.go:81] duration metric: took 4m0.000124062s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:11.510861   58186 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:53:11.510885   58186 pod_ready.go:38] duration metric: took 4m10.548289153s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:11.510918   58186 kubeadm.go:591] duration metric: took 4m18.480793797s to restartPrimaryControlPlane
	W0410 22:53:11.510993   58186 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:53:11.511019   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:53:08.151155   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.151358   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.400722   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.401658   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.401745   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.652391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.652682   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:17.149892   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:16.900482   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:18.900789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:19.152154   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:21.649975   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:20.902068   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:23.401500   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:22.660165   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:53:22.660260   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:22.660520   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:23.653457   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:26.149469   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:25.903070   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:28.400947   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:27.660705   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:27.660919   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:28.150895   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.650254   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.401054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.401994   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.654427   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:35.149580   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150506   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150533   58701 pod_ready.go:81] duration metric: took 4m0.00757056s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:37.150544   58701 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0410 22:53:37.150552   58701 pod_ready.go:38] duration metric: took 4m5.55870495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:37.150570   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:53:37.150602   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:37.150659   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:37.213472   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.213499   58701 cri.go:89] found id: ""
	I0410 22:53:37.213511   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:37.213561   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.218928   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:37.218997   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:37.260045   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.260066   58701 cri.go:89] found id: ""
	I0410 22:53:37.260073   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:37.260116   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.265329   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:37.265393   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:37.306649   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:37.306674   58701 cri.go:89] found id: ""
	I0410 22:53:37.306682   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:37.306729   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.311163   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:37.311213   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:37.351855   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:37.351883   58701 cri.go:89] found id: ""
	I0410 22:53:37.351890   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:37.351937   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.356427   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:37.356497   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:34.900998   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:36.901173   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:39.400680   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.661409   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:37.661698   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:37.399224   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:37.399248   58701 cri.go:89] found id: ""
	I0410 22:53:37.399257   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:37.399315   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.404314   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:37.404380   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:37.444169   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.444196   58701 cri.go:89] found id: ""
	I0410 22:53:37.444205   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:37.444264   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.448618   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:37.448693   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:37.487481   58701 cri.go:89] found id: ""
	I0410 22:53:37.487507   58701 logs.go:276] 0 containers: []
	W0410 22:53:37.487514   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:37.487519   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:37.487566   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:37.531000   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:37.531018   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.531022   58701 cri.go:89] found id: ""
	I0410 22:53:37.531029   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:37.531081   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.535679   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.539974   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:37.539998   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:37.601043   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:37.601086   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:37.616427   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:37.616458   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.669951   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:37.669983   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.716243   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:37.716273   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.774644   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:37.774678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.821033   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:37.821077   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:37.883644   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:37.883678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:38.019289   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:38.019320   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:38.057708   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:38.057739   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:38.100119   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:38.100149   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:38.143845   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:38.143875   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:38.186718   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:38.186749   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:41.168951   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:53:41.186828   58701 api_server.go:72] duration metric: took 4m17.343179611s to wait for apiserver process to appear ...
	I0410 22:53:41.186866   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:53:41.186911   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:41.186972   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:41.228167   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.228194   58701 cri.go:89] found id: ""
	I0410 22:53:41.228201   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:41.228251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.232754   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:41.232812   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:41.271497   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.271519   58701 cri.go:89] found id: ""
	I0410 22:53:41.271527   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:41.271575   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.276165   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:41.276234   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:41.319164   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.319187   58701 cri.go:89] found id: ""
	I0410 22:53:41.319195   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:41.319251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.323627   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:41.323696   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:41.366648   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:41.366671   58701 cri.go:89] found id: ""
	I0410 22:53:41.366678   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:41.366733   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.371132   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:41.371197   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:41.412956   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.412974   58701 cri.go:89] found id: ""
	I0410 22:53:41.412982   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:41.413034   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.417441   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:41.417495   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:41.460008   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.460037   58701 cri.go:89] found id: ""
	I0410 22:53:41.460048   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:41.460105   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.464422   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:41.464492   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:41.504095   58701 cri.go:89] found id: ""
	I0410 22:53:41.504126   58701 logs.go:276] 0 containers: []
	W0410 22:53:41.504134   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:41.504140   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:41.504199   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:41.543443   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.543467   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.543473   58701 cri.go:89] found id: ""
	I0410 22:53:41.543481   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:41.543540   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.548182   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.552917   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:41.552941   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.601620   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:41.601652   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.653090   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:41.653124   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.692683   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:41.692711   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.736312   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:41.736353   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:41.753242   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:41.753283   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.812881   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:41.812910   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.860686   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:41.860714   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.902523   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:41.902546   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:41.945812   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:41.945848   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:42.001012   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:42.001046   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:42.123971   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:42.124000   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:42.168773   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:42.168806   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:41.405604   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.901172   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.595677   58186 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.084634816s)
	I0410 22:53:43.595765   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:43.613470   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:53:43.624876   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:53:43.638564   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:53:43.638592   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:53:43.638641   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:53:43.652554   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:53:43.652608   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:53:43.664263   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:53:43.674443   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:53:43.674497   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:53:43.695444   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.705446   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:53:43.705518   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.716451   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:53:43.726343   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:53:43.726407   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:53:43.736859   58186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:53:43.957994   58186 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:53:45.115742   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:53:45.120239   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:53:45.121662   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:53:45.121690   58701 api_server.go:131] duration metric: took 3.934815447s to wait for apiserver health ...
	I0410 22:53:45.121699   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:53:45.121727   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:45.121780   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:45.172291   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.172315   58701 cri.go:89] found id: ""
	I0410 22:53:45.172324   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:45.172382   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.177041   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:45.177103   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:45.213853   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:45.213880   58701 cri.go:89] found id: ""
	I0410 22:53:45.213889   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:45.213944   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.218478   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:45.218546   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:45.268753   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.268779   58701 cri.go:89] found id: ""
	I0410 22:53:45.268792   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:45.268843   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.273223   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:45.273291   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:45.314032   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:45.314057   58701 cri.go:89] found id: ""
	I0410 22:53:45.314066   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:45.314115   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.318671   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:45.318740   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:45.356139   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.356167   58701 cri.go:89] found id: ""
	I0410 22:53:45.356177   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:45.356234   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.361449   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:45.361520   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:45.405153   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.405174   58701 cri.go:89] found id: ""
	I0410 22:53:45.405181   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:45.405230   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.409795   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:45.409871   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:45.451984   58701 cri.go:89] found id: ""
	I0410 22:53:45.452016   58701 logs.go:276] 0 containers: []
	W0410 22:53:45.452026   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:45.452034   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:45.452095   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:45.491612   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.491650   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.491656   58701 cri.go:89] found id: ""
	I0410 22:53:45.491665   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:45.491724   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.496253   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.500723   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:45.500751   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.557083   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:45.557118   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.616768   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:45.616804   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.664097   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:45.664133   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.707920   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:45.707957   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:45.751862   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:45.751898   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.806584   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:45.806619   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.846145   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:45.846170   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:45.970766   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:45.970796   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:46.024049   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:46.024081   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:46.067009   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:46.067048   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:46.462765   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:46.462812   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:46.520007   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:46.520049   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:49.047137   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:53:49.047166   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.047170   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.047174   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.047177   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.047180   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.047183   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.047189   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.047192   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.047201   58701 system_pods.go:74] duration metric: took 3.925495812s to wait for pod list to return data ...
	I0410 22:53:49.047208   58701 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:53:49.050341   58701 default_sa.go:45] found service account: "default"
	I0410 22:53:49.050363   58701 default_sa.go:55] duration metric: took 3.148222ms for default service account to be created ...
	I0410 22:53:49.050371   58701 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:53:49.056364   58701 system_pods.go:86] 8 kube-system pods found
	I0410 22:53:49.056390   58701 system_pods.go:89] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.056414   58701 system_pods.go:89] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.056423   58701 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.056431   58701 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.056437   58701 system_pods.go:89] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.056444   58701 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.056455   58701 system_pods.go:89] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.056462   58701 system_pods.go:89] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.056475   58701 system_pods.go:126] duration metric: took 6.097239ms to wait for k8s-apps to be running ...
	I0410 22:53:49.056492   58701 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:53:49.056537   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:49.077239   58701 system_svc.go:56] duration metric: took 20.737127ms WaitForService to wait for kubelet
	I0410 22:53:49.077269   58701 kubeadm.go:576] duration metric: took 4m25.233626302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:53:49.077297   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:53:49.080463   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:53:49.080486   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:53:49.080497   58701 node_conditions.go:105] duration metric: took 3.195662ms to run NodePressure ...
	I0410 22:53:49.080508   58701 start.go:240] waiting for startup goroutines ...
	I0410 22:53:49.080515   58701 start.go:245] waiting for cluster config update ...
	I0410 22:53:49.080525   58701 start.go:254] writing updated cluster config ...
	I0410 22:53:49.080805   58701 ssh_runner.go:195] Run: rm -f paused
	I0410 22:53:49.141489   58701 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:53:49.143597   58701 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-519831" cluster and "default" namespace by default
	I0410 22:53:45.903632   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:48.403981   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.064071   58186 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0410 22:53:53.064154   58186 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:53:53.064260   58186 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:53:53.064429   58186 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:53:53.064574   58186 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:53:53.064670   58186 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:53:53.066595   58186 out.go:204]   - Generating certificates and keys ...
	I0410 22:53:53.066703   58186 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:53:53.066808   58186 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:53:53.066929   58186 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:53:53.067023   58186 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:53:53.067155   58186 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:53:53.067235   58186 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:53:53.067329   58186 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:53:53.067433   58186 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:53:53.067546   58186 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:53:53.067655   58186 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:53:53.067733   58186 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:53:53.067890   58186 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:53:53.067961   58186 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:53:53.068049   58186 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:53:53.068132   58186 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:53:53.068232   58186 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:53:53.068310   58186 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:53:53.068379   58186 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:53:53.068510   58186 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:53:53.070126   58186 out.go:204]   - Booting up control plane ...
	I0410 22:53:53.070219   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:53:53.070324   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:53:53.070425   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:53:53.070565   58186 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:53:53.070686   58186 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:53:53.070748   58186 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:53:53.070973   58186 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:53:53.071083   58186 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002820 seconds
	I0410 22:53:53.071249   58186 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:53:53.071424   58186 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:53:53.071485   58186 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:53:53.071624   58186 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-706500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:53:53.071680   58186 kubeadm.go:309] [bootstrap-token] Using token: 0wvld6.jntz9ft9bn5g46le
	I0410 22:53:53.073567   58186 out.go:204]   - Configuring RBAC rules ...
	I0410 22:53:53.073708   58186 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:53:53.073819   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:53:53.074015   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:53:53.074206   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:53:53.074370   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:53:53.074548   58186 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:53:53.074726   58186 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:53:53.074798   58186 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:53:53.074873   58186 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:53:53.074884   58186 kubeadm.go:309] 
	I0410 22:53:53.074956   58186 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:53:53.074978   58186 kubeadm.go:309] 
	I0410 22:53:53.075077   58186 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:53:53.075088   58186 kubeadm.go:309] 
	I0410 22:53:53.075119   58186 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:53:53.075191   58186 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:53:53.075262   58186 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:53:53.075273   58186 kubeadm.go:309] 
	I0410 22:53:53.075337   58186 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:53:53.075353   58186 kubeadm.go:309] 
	I0410 22:53:53.075419   58186 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:53:53.075437   58186 kubeadm.go:309] 
	I0410 22:53:53.075503   58186 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:53:53.075621   58186 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:53:53.075714   58186 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:53:53.075724   58186 kubeadm.go:309] 
	I0410 22:53:53.075829   58186 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:53:53.075936   58186 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:53:53.075953   58186 kubeadm.go:309] 
	I0410 22:53:53.076058   58186 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076196   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:53:53.076253   58186 kubeadm.go:309] 	--control-plane 
	I0410 22:53:53.076270   58186 kubeadm.go:309] 
	I0410 22:53:53.076387   58186 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:53:53.076422   58186 kubeadm.go:309] 
	I0410 22:53:53.076516   58186 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076661   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:53:53.076711   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:53:53.076726   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:53:53.078503   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:53:50.902397   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.403449   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.079631   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:53:53.132043   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:53:53.167760   58186 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:53:53.167847   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:53.167870   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-706500 minikube.k8s.io/updated_at=2024_04_10T22_53_53_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=embed-certs-706500 minikube.k8s.io/primary=true
	I0410 22:53:53.511359   58186 ops.go:34] apiserver oom_adj: -16
	I0410 22:53:53.511506   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.012080   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.511816   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.011883   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.511809   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.011572   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.512114   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:57.011878   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.900548   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.901541   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.662444   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:57.662687   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:57.511726   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.011563   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.512617   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.012145   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.512448   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.012278   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.512290   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.012507   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.512415   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:02.011660   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.401622   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.902558   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.511581   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.012326   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.512539   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.012085   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.512496   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.011911   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.512180   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.619801   58186 kubeadm.go:1107] duration metric: took 12.452015223s to wait for elevateKubeSystemPrivileges
	W0410 22:54:05.619839   58186 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:54:05.619847   58186 kubeadm.go:393] duration metric: took 5m12.640298551s to StartCluster
	I0410 22:54:05.619862   58186 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.619936   58186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:54:05.621989   58186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.622331   58186 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:54:05.624233   58186 out.go:177] * Verifying Kubernetes components...
	I0410 22:54:05.622444   58186 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:54:05.622516   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:54:05.625850   58186 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-706500"
	I0410 22:54:05.625872   58186 addons.go:69] Setting default-storageclass=true in profile "embed-certs-706500"
	I0410 22:54:05.625882   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:54:05.625893   58186 addons.go:69] Setting metrics-server=true in profile "embed-certs-706500"
	I0410 22:54:05.625924   58186 addons.go:234] Setting addon metrics-server=true in "embed-certs-706500"
	W0410 22:54:05.625930   58186 addons.go:243] addon metrics-server should already be in state true
	I0410 22:54:05.625954   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.625888   58186 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-706500"
	I0410 22:54:05.625903   58186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-706500"
	W0410 22:54:05.625982   58186 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:54:05.626012   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.626365   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626407   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626421   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626440   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626441   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626442   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.643647   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0410 22:54:05.643758   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0410 22:54:05.644070   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0410 22:54:05.644101   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644253   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644856   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644883   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644915   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.645239   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645419   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.645475   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.645489   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645501   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.646021   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.646035   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646062   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.646588   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646619   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.648242   58186 addons.go:234] Setting addon default-storageclass=true in "embed-certs-706500"
	W0410 22:54:05.648261   58186 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:54:05.648282   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.648555   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.648582   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.661773   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0410 22:54:05.662556   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.663049   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.663073   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.663474   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.663708   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.664716   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I0410 22:54:05.665027   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.665617   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.665634   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.665706   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0410 22:54:05.666342   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.666343   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.665946   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.668790   58186 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:54:05.667015   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.667244   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.670336   58186 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:05.670357   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:54:05.670374   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.668826   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.668843   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.671350   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.671633   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.673653   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.675310   58186 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:54:05.674011   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.674533   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.676671   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:54:05.676677   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.676690   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:54:05.676710   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.676713   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.676821   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.676976   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.677117   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.680146   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.680927   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.680964   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.681136   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.681515   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.681681   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.681834   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.688424   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0410 22:54:05.688861   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.689299   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.689320   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.689589   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.689741   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.691090   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.691335   58186 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:05.691353   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:54:05.691369   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.694552   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695080   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.695118   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695426   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.695771   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.695939   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.696084   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.860032   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:54:05.881036   58186 node_ready.go:35] waiting up to 6m0s for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891218   58186 node_ready.go:49] node "embed-certs-706500" has status "Ready":"True"
	I0410 22:54:05.891237   58186 node_ready.go:38] duration metric: took 10.166143ms for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891247   58186 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:05.899013   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:06.064031   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:54:06.064051   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:54:06.065727   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:06.075127   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:06.140574   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:54:06.140607   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:54:06.216389   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:06.216428   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:54:06.356117   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:07.409983   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.334826611s)
	I0410 22:54:07.410039   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410052   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410103   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.344342448s)
	I0410 22:54:07.410184   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410199   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410321   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410362   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410371   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410382   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410452   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410505   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410519   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410531   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410678   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410765   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410802   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410820   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410822   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.438723   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.438742   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.439085   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.439104   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.439085   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.738187   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.382031326s)
	I0410 22:54:07.738252   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738267   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738556   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738586   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738597   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738604   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738865   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738885   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738908   58186 addons.go:470] Verifying addon metrics-server=true in "embed-certs-706500"
	I0410 22:54:07.741639   58186 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0410 22:54:05.403374   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:07.903041   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.895154   57270 pod_ready.go:81] duration metric: took 4m0.000708165s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	E0410 22:54:08.895186   57270 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:54:08.895214   57270 pod_ready.go:38] duration metric: took 4m14.550044852s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:08.895246   57270 kubeadm.go:591] duration metric: took 4m22.444968141s to restartPrimaryControlPlane
	W0410 22:54:08.895308   57270 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:54:08.895339   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:54:07.742954   58186 addons.go:505] duration metric: took 2.120520274s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0410 22:54:07.910203   58186 pod_ready.go:102] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.906369   58186 pod_ready.go:92] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.906394   58186 pod_ready.go:81] duration metric: took 3.007348288s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.906407   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913564   58186 pod_ready.go:92] pod "coredns-76f75df574-v2pp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.913582   58186 pod_ready.go:81] duration metric: took 7.168463ms for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913592   58186 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919270   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.919296   58186 pod_ready.go:81] duration metric: took 5.696297ms for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919308   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924389   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.924430   58186 pod_ready.go:81] duration metric: took 5.111624ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924443   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929296   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.929320   58186 pod_ready.go:81] duration metric: took 4.869073ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929333   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305730   58186 pod_ready.go:92] pod "kube-proxy-xj5nq" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.305756   58186 pod_ready.go:81] duration metric: took 376.415901ms for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305770   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703841   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.703869   58186 pod_ready.go:81] duration metric: took 398.090582ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703881   58186 pod_ready.go:38] duration metric: took 3.812625835s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:09.703898   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:54:09.703957   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:54:09.720728   58186 api_server.go:72] duration metric: took 4.098354983s to wait for apiserver process to appear ...
	I0410 22:54:09.720763   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:54:09.720786   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:54:09.726522   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:54:09.727951   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:54:09.727979   58186 api_server.go:131] duration metric: took 7.20731ms to wait for apiserver health ...
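The healthz wait above issues a plain GET against the apiserver's /healthz endpoint and accepts a 200 response. A minimal Go sketch of that probe, assuming the same https://192.168.39.10:8443 endpoint shown in the log and skipping certificate verification purely for illustration; this is not minikube's api_server.go:

// apiserver_healthz_sketch.go - illustrative probe of the apiserver health endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skip TLS verification only because this sketch has no access to the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.10:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect "200 ok", as in the log
}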
	I0410 22:54:09.727989   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:54:09.908166   58186 system_pods.go:59] 9 kube-system pods found
	I0410 22:54:09.908203   58186 system_pods.go:61] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:09.908212   58186 system_pods.go:61] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:09.908217   58186 system_pods.go:61] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:09.908222   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:09.908227   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:09.908232   58186 system_pods.go:61] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:09.908236   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:09.908246   58186 system_pods.go:61] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:09.908251   58186 system_pods.go:61] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:09.908263   58186 system_pods.go:74] duration metric: took 180.267138ms to wait for pod list to return data ...
	I0410 22:54:09.908276   58186 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:54:10.103556   58186 default_sa.go:45] found service account: "default"
	I0410 22:54:10.103586   58186 default_sa.go:55] duration metric: took 195.301798ms for default service account to be created ...
	I0410 22:54:10.103597   58186 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:54:10.309537   58186 system_pods.go:86] 9 kube-system pods found
	I0410 22:54:10.309566   58186 system_pods.go:89] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:10.309572   58186 system_pods.go:89] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:10.309578   58186 system_pods.go:89] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:10.309583   58186 system_pods.go:89] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:10.309588   58186 system_pods.go:89] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:10.309592   58186 system_pods.go:89] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:10.309596   58186 system_pods.go:89] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:10.309602   58186 system_pods.go:89] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:10.309607   58186 system_pods.go:89] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:10.309617   58186 system_pods.go:126] duration metric: took 206.014442ms to wait for k8s-apps to be running ...
	I0410 22:54:10.309624   58186 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:54:10.309666   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:10.324614   58186 system_svc.go:56] duration metric: took 14.97975ms WaitForService to wait for kubelet
	I0410 22:54:10.324651   58186 kubeadm.go:576] duration metric: took 4.702277594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:54:10.324669   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:54:10.503911   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:54:10.503939   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:54:10.503949   58186 node_conditions.go:105] duration metric: took 179.27538ms to run NodePressure ...
	I0410 22:54:10.503959   58186 start.go:240] waiting for startup goroutines ...
	I0410 22:54:10.503966   58186 start.go:245] waiting for cluster config update ...
	I0410 22:54:10.503975   58186 start.go:254] writing updated cluster config ...
	I0410 22:54:10.504242   58186 ssh_runner.go:195] Run: rm -f paused
	I0410 22:54:10.555500   58186 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:54:10.557941   58186 out.go:177] * Done! kubectl is now configured to use "embed-certs-706500" cluster and "default" namespace by default
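The pod_ready.go waits above poll each system pod until its Ready condition reports True. A minimal client-go sketch of that pattern, assuming a local kubeconfig path and reusing one pod name from the log for illustration; this is not minikube's own implementation:

// podready_sketch.go - poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %s/%s to be Ready", ns, name)
		case <-time.After(2 * time.Second): // poll interval, roughly the cadence seen in the log
		}
	}
}

func main() {
	// Kubeconfig path is illustrative, not taken from the run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "etcd-embed-certs-706500"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}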
	I0410 22:54:37.664290   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:54:37.664604   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:54:37.664634   57719 kubeadm.go:309] 
	I0410 22:54:37.664776   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:54:37.664843   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:54:37.664854   57719 kubeadm.go:309] 
	I0410 22:54:37.664901   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:54:37.664968   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:54:37.665086   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:54:37.665101   57719 kubeadm.go:309] 
	I0410 22:54:37.665245   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:54:37.665313   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:54:37.665360   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:54:37.665372   57719 kubeadm.go:309] 
	I0410 22:54:37.665579   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:54:37.665695   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:54:37.665707   57719 kubeadm.go:309] 
	I0410 22:54:37.665868   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:54:37.666063   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:54:37.666192   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:54:37.666272   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:54:37.666284   57719 kubeadm.go:309] 
	I0410 22:54:37.667202   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:37.667329   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:54:37.667420   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0410 22:54:37.667555   57719 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0410 22:54:37.667623   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
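The kubelet-check failures quoted above all come from probing the kubelet health endpoint on port 10248 and getting connection refused. A small Go sketch of the equivalent probe (the same request as the quoted curl command), which can be handy when reproducing this failure by hand:

// kubelet_healthz_sketch.go - probe the kubelet health endpoint on the default port.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// "connection refused" here matches the failures above: the kubelet never
		// came up, so the static control-plane pods could not start either.
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, string(body))
}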
	I0410 22:54:40.975782   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.080419546s)
	I0410 22:54:40.975854   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:40.993677   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:54:41.006185   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:41.016820   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:41.016850   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:41.016985   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:41.026802   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:41.026871   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:41.036992   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:41.046896   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:41.046962   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:41.057184   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.067261   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:41.067321   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.077846   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:41.087745   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:41.087795   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
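The stale-config check above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes any file that lacks it, so kubeadm can regenerate them. A rough Go sketch of that logic, using the same paths as the log and only minimal error handling; it is not the code minikube runs:

// staleconfig_sketch.go - drop kubeconfigs that do not point at the expected endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it (equivalent to the `sudo rm -f` above).
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("keeping config:", f)
	}
}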
	I0410 22:54:41.098660   57270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:41.159736   57270 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.1
	I0410 22:54:41.159807   57270 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:54:41.316137   57270 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:54:41.316279   57270 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:54:41.316446   57270 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:54:41.559720   57270 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:54:41.561946   57270 out.go:204]   - Generating certificates and keys ...
	I0410 22:54:41.562039   57270 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:54:41.562141   57270 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:54:41.562211   57270 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:54:41.562275   57270 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:54:41.562352   57270 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:54:41.562460   57270 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:54:41.562572   57270 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:54:41.562667   57270 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:54:41.562803   57270 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:54:41.562917   57270 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:54:41.562992   57270 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:54:41.563081   57270 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:54:41.723729   57270 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:54:41.834274   57270 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:54:41.936758   57270 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:54:42.038298   57270 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:54:42.229459   57270 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:54:42.230047   57270 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:54:42.233021   57270 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:54:42.236068   57270 out.go:204]   - Booting up control plane ...
	I0410 22:54:42.236197   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:54:42.236303   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:54:42.236421   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:54:42.255487   57270 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:54:42.256345   57270 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:54:42.256450   57270 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:54:42.391623   57270 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0410 22:54:42.391736   57270 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0410 22:54:43.393825   57270 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00265832s
	I0410 22:54:43.393973   57270 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0410 22:54:43.156141   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.488487447s)
	I0410 22:54:43.156227   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:43.170709   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:43.180624   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:43.180647   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:43.180701   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:43.190482   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:43.190533   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:43.200261   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:43.210061   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:43.210116   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:43.220430   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.230810   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:43.230877   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.241141   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:43.251043   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:43.251111   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:54:43.261163   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:43.534002   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:48.398196   57270 kubeadm.go:309] [api-check] The API server is healthy after 5.002218646s
	I0410 22:54:48.410618   57270 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:54:48.430553   57270 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:54:48.465343   57270 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:54:48.465614   57270 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-646133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:54:48.489066   57270 kubeadm.go:309] [bootstrap-token] Using token: 14xwwp.uyth37qsjfn0mpcx
	I0410 22:54:48.490984   57270 out.go:204]   - Configuring RBAC rules ...
	I0410 22:54:48.491116   57270 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:54:48.502789   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:54:48.516871   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:54:48.523600   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:54:48.527939   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:54:48.537216   57270 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:54:48.806350   57270 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:54:49.234618   57270 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:54:49.803640   57270 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:54:49.804948   57270 kubeadm.go:309] 
	I0410 22:54:49.805074   57270 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:54:49.805095   57270 kubeadm.go:309] 
	I0410 22:54:49.805194   57270 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:54:49.805209   57270 kubeadm.go:309] 
	I0410 22:54:49.805240   57270 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:54:49.805323   57270 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:54:49.805403   57270 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:54:49.805415   57270 kubeadm.go:309] 
	I0410 22:54:49.805482   57270 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:54:49.805489   57270 kubeadm.go:309] 
	I0410 22:54:49.805562   57270 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:54:49.805580   57270 kubeadm.go:309] 
	I0410 22:54:49.805646   57270 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:54:49.805781   57270 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:54:49.805888   57270 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:54:49.805901   57270 kubeadm.go:309] 
	I0410 22:54:49.806038   57270 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:54:49.806143   57270 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:54:49.806154   57270 kubeadm.go:309] 
	I0410 22:54:49.806262   57270 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806398   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:54:49.806438   57270 kubeadm.go:309] 	--control-plane 
	I0410 22:54:49.806456   57270 kubeadm.go:309] 
	I0410 22:54:49.806565   57270 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:54:49.806581   57270 kubeadm.go:309] 
	I0410 22:54:49.806661   57270 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806777   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:54:49.808385   57270 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:49.808455   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:54:49.808473   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:54:49.811276   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:54:49.812840   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:54:49.829865   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
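The bridge CNI step above writes a conflist into /etc/cni/net.d. A sketch of that step with a generic bridge plus host-local configuration; the JSON is illustrative only and is not the exact 496-byte file the run copies (network name, subnet and bridge name are assumptions):

// cni_bridge_sketch.go - write a minimal bridge CNI conflist.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	// Rough equivalent of the `scp memory --> /etc/cni/net.d/1-k8s.conflist` step above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}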
	I0410 22:54:49.854383   57270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:54:49.854454   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:49.854456   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-646133 minikube.k8s.io/updated_at=2024_04_10T22_54_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=no-preload-646133 minikube.k8s.io/primary=true
	I0410 22:54:49.888254   57270 ops.go:34] apiserver oom_adj: -16
	I0410 22:54:50.073922   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:50.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.074134   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.574654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.074970   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.074799   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.574902   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.074695   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.574038   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.074975   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.574297   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.074490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.574490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.074280   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.574569   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.074654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.074630   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.574546   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.075044   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.074961   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.574004   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.074121   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.574476   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.705604   57270 kubeadm.go:1107] duration metric: took 12.851213125s to wait for elevateKubeSystemPrivileges
	W0410 22:55:02.705636   57270 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:55:02.705644   57270 kubeadm.go:393] duration metric: took 5m16.306442396s to StartCluster
	I0410 22:55:02.705660   57270 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.705739   57270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:55:02.707592   57270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.707844   57270 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:55:02.709479   57270 out.go:177] * Verifying Kubernetes components...
	I0410 22:55:02.707944   57270 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:55:02.708074   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:55:02.710816   57270 addons.go:69] Setting storage-provisioner=true in profile "no-preload-646133"
	I0410 22:55:02.710827   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:55:02.710854   57270 addons.go:234] Setting addon storage-provisioner=true in "no-preload-646133"
	W0410 22:55:02.710865   57270 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:55:02.710889   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.710819   57270 addons.go:69] Setting default-storageclass=true in profile "no-preload-646133"
	I0410 22:55:02.710975   57270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-646133"
	I0410 22:55:02.710821   57270 addons.go:69] Setting metrics-server=true in profile "no-preload-646133"
	I0410 22:55:02.711079   57270 addons.go:234] Setting addon metrics-server=true in "no-preload-646133"
	W0410 22:55:02.711090   57270 addons.go:243] addon metrics-server should already be in state true
	I0410 22:55:02.711119   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.711325   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711349   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711352   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711382   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711486   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711507   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.729696   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0410 22:55:02.730179   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.730725   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.730751   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.731138   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I0410 22:55:02.731161   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0410 22:55:02.731223   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.731532   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731551   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731920   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.731951   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.732083   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732103   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732266   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732290   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732642   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732692   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732892   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.733291   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.733336   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.737245   57270 addons.go:234] Setting addon default-storageclass=true in "no-preload-646133"
	W0410 22:55:02.737274   57270 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:55:02.737304   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.737674   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.737710   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.749656   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0410 22:55:02.750133   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.751030   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.751054   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.751467   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.751642   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.752548   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0410 22:55:02.753119   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.753727   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.753903   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.753918   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.755963   57270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:55:02.754443   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.757499   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0410 22:55:02.757548   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:55:02.757559   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:55:02.757576   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.757684   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.758428   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.758880   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.758893   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.759783   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.760197   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.760224   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.760379   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.762291   57270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:55:02.761210   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.761741   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.763819   57270 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:02.763907   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:55:02.763918   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.763841   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.763963   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.764040   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.764153   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.764239   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.767729   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767758   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.767776   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767730   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.767951   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.768100   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.768223   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.782788   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0410 22:55:02.783161   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.783701   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.783726   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.784081   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.784347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.785932   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.786186   57270 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:02.786200   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:55:02.786217   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.789193   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789526   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.789576   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789837   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.790096   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.790278   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.790431   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.922239   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:55:02.957665   57270 node_ready.go:35] waiting up to 6m0s for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981427   57270 node_ready.go:49] node "no-preload-646133" has status "Ready":"True"
	I0410 22:55:02.981449   57270 node_ready.go:38] duration metric: took 23.75134ms for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981458   57270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:02.986557   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:03.024992   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:03.032744   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:03.156968   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:55:03.156989   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:55:03.237497   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:55:03.237522   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:55:03.274982   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.275005   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:55:03.317464   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
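The addon step above stages the metrics-server manifests and applies them with the pinned kubectl under the cluster kubeconfig. A short Go sketch of an equivalent apply, mirroring the binary path and manifest list from the log; this is not minikube's addons code:

// apply_addons_sketch.go - apply the staged addon manifests with the pinned kubectl.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.0-rc.1/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	// Point kubectl at the cluster's kubeconfig, as the logged command does.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}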
	I0410 22:55:03.512107   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512130   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512173   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512198   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512435   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512455   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512525   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512530   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512541   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512542   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512538   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512551   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512558   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512497   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512782   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512799   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512876   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512915   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512878   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.525688   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.525707   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.526017   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.526042   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.526057   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.905597   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.905627   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906016   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906081   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906089   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906101   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.906107   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906353   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906355   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906381   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906392   57270 addons.go:470] Verifying addon metrics-server=true in "no-preload-646133"
	I0410 22:55:03.908467   57270 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0410 22:55:03.910238   57270 addons.go:505] duration metric: took 1.20230017s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0410 22:55:05.035855   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"False"
	I0410 22:55:05.493330   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.493354   57270 pod_ready.go:81] duration metric: took 2.506773848s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.493365   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498568   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.498593   57270 pod_ready.go:81] duration metric: took 5.220548ms for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498604   57270 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505133   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.505156   57270 pod_ready.go:81] duration metric: took 6.544104ms for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505165   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510391   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.510415   57270 pod_ready.go:81] duration metric: took 5.2417ms for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510427   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524717   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.524737   57270 pod_ready.go:81] duration metric: took 14.302445ms for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524747   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891005   57270 pod_ready.go:92] pod "kube-proxy-24vhc" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.891029   57270 pod_ready.go:81] duration metric: took 366.275947ms for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891039   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291050   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:06.291075   57270 pod_ready.go:81] duration metric: took 400.028808ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291084   57270 pod_ready.go:38] duration metric: took 3.309617471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:06.291101   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:55:06.291165   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:55:06.308433   57270 api_server.go:72] duration metric: took 3.600549626s to wait for apiserver process to appear ...
	I0410 22:55:06.308461   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:55:06.308479   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:55:06.312630   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:55:06.313434   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:55:06.313457   57270 api_server.go:131] duration metric: took 4.989017ms to wait for apiserver health ...
	I0410 22:55:06.313466   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:55:06.494780   57270 system_pods.go:59] 9 kube-system pods found
	I0410 22:55:06.494813   57270 system_pods.go:61] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.494820   57270 system_pods.go:61] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.494826   57270 system_pods.go:61] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.494833   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.494838   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.494843   57270 system_pods.go:61] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.494848   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.494856   57270 system_pods.go:61] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.494862   57270 system_pods.go:61] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.494871   57270 system_pods.go:74] duration metric: took 181.399385ms to wait for pod list to return data ...
	I0410 22:55:06.494890   57270 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:55:06.690158   57270 default_sa.go:45] found service account: "default"
	I0410 22:55:06.690185   57270 default_sa.go:55] duration metric: took 195.289153ms for default service account to be created ...
	I0410 22:55:06.690194   57270 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:55:06.893604   57270 system_pods.go:86] 9 kube-system pods found
	I0410 22:55:06.893632   57270 system_pods.go:89] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.893638   57270 system_pods.go:89] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.893642   57270 system_pods.go:89] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.893646   57270 system_pods.go:89] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.893651   57270 system_pods.go:89] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.893656   57270 system_pods.go:89] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.893659   57270 system_pods.go:89] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.893665   57270 system_pods.go:89] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.893670   57270 system_pods.go:89] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.893679   57270 system_pods.go:126] duration metric: took 203.480657ms to wait for k8s-apps to be running ...
	I0410 22:55:06.893686   57270 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:55:06.893730   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:55:06.909072   57270 system_svc.go:56] duration metric: took 15.374403ms WaitForService to wait for kubelet
	I0410 22:55:06.909096   57270 kubeadm.go:576] duration metric: took 4.20122533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:55:06.909115   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:55:07.090651   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:55:07.090673   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:55:07.090682   57270 node_conditions.go:105] duration metric: took 181.563241ms to run NodePressure ...
	I0410 22:55:07.090692   57270 start.go:240] waiting for startup goroutines ...
	I0410 22:55:07.090698   57270 start.go:245] waiting for cluster config update ...
	I0410 22:55:07.090707   57270 start.go:254] writing updated cluster config ...
	I0410 22:55:07.090957   57270 ssh_runner.go:195] Run: rm -f paused
	I0410 22:55:07.140644   57270 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.1 (minor skew: 1)
	I0410 22:55:07.142770   57270 out.go:177] * Done! kubectl is now configured to use "no-preload-646133" cluster and "default" namespace by default
	I0410 22:56:40.435994   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:56:40.436123   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0410 22:56:40.437810   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:56:40.437872   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:56:40.437967   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:56:40.438082   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:56:40.438235   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:56:40.438321   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:56:40.440009   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:56:40.440110   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:56:40.440210   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:56:40.440336   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:56:40.440417   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:56:40.440501   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:56:40.440563   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:56:40.440622   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:56:40.440685   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:56:40.440752   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:56:40.440858   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:56:40.440923   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:56:40.441004   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:56:40.441076   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:56:40.441131   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:56:40.441185   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:56:40.441242   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:56:40.441375   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:56:40.441501   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:56:40.441565   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:56:40.441658   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:56:40.443122   57719 out.go:204]   - Booting up control plane ...
	I0410 22:56:40.443230   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:56:40.443332   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:56:40.443431   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:56:40.443549   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:56:40.443710   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:56:40.443783   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:56:40.443883   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444111   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444200   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444429   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444520   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444761   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444869   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445124   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445235   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445416   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445423   57719 kubeadm.go:309] 
	I0410 22:56:40.445465   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:56:40.445512   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:56:40.445520   57719 kubeadm.go:309] 
	I0410 22:56:40.445548   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:56:40.445595   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:56:40.445712   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:56:40.445722   57719 kubeadm.go:309] 
	I0410 22:56:40.445880   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:56:40.445931   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:56:40.445967   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:56:40.445972   57719 kubeadm.go:309] 
	I0410 22:56:40.446095   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:56:40.446190   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:56:40.446201   57719 kubeadm.go:309] 
	I0410 22:56:40.446326   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:56:40.446452   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:56:40.446548   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:56:40.446611   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:56:40.446659   57719 kubeadm.go:309] 
	I0410 22:56:40.446681   57719 kubeadm.go:393] duration metric: took 8m5.163157284s to StartCluster
	I0410 22:56:40.446805   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:56:40.446880   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:56:40.499163   57719 cri.go:89] found id: ""
	I0410 22:56:40.499196   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.499205   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:56:40.499212   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:56:40.499292   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:56:40.545429   57719 cri.go:89] found id: ""
	I0410 22:56:40.545465   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.545473   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:56:40.545479   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:56:40.545538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:56:40.583842   57719 cri.go:89] found id: ""
	I0410 22:56:40.583870   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.583880   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:56:40.583887   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:56:40.583957   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:56:40.621054   57719 cri.go:89] found id: ""
	I0410 22:56:40.621075   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.621083   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:56:40.621091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:56:40.621149   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:56:40.665133   57719 cri.go:89] found id: ""
	I0410 22:56:40.665161   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.665168   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:56:40.665175   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:56:40.665231   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:56:40.707490   57719 cri.go:89] found id: ""
	I0410 22:56:40.707519   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.707529   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:56:40.707536   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:56:40.707598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:56:40.748539   57719 cri.go:89] found id: ""
	I0410 22:56:40.748565   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.748576   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:56:40.748584   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:56:40.748644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:56:40.792326   57719 cri.go:89] found id: ""
	I0410 22:56:40.792349   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.792358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:56:40.792366   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:56:40.792376   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:56:40.844309   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:56:40.844346   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:56:40.859678   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:56:40.859715   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:56:40.950099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:56:40.950123   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:56:40.950141   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:56:41.073547   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:56:41.073589   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0410 22:56:41.124970   57719 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0410 22:56:41.125024   57719 out.go:239] * 
	W0410 22:56:41.125096   57719 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.125129   57719 out.go:239] * 
	W0410 22:56:41.126153   57719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:56:41.129869   57719 out.go:177] 
	W0410 22:56:41.131207   57719 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.131286   57719 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0410 22:56:41.131326   57719 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0410 22:56:41.133049   57719 out.go:177] 
	
	
	==> CRI-O <==
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.691477695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790192691445161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2fbb44c-c0b7-47b8-9e0b-040a5b066aaf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.692599880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=caf59d0d-aadf-4eb0-919b-24d63e51024f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.692675398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=caf59d0d-aadf-4eb0-919b-24d63e51024f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.692939679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20860098a53a6c38bbb6118735789916a226b29170ef73a5f59b788e3e789d62,PodSandboxId:49425f3f0f3f6b7a6e493aff156f5590f340a93172980c09d58f0508792c2d4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789647852063589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad8e533-69ca-4eb5-9595-e6808dc0ff1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9a77a63,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb30f5e43a4c16269f8f5d80af70f51e68db7156d39cda88be08c09fc0b9603,PodSandboxId:b4dfdda9ca2105236b568781ee16a193ab337538af7a4e04193548a16506b913,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647278456064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bvdp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8a326-77ef-469f-abf7-082ff8a44782,},Annotations:map[string]string{io.kubernetes.container.hash: 208fdcc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9f7f18b77ab56e9facff46ac6daf77efb0725a434223643e10a22781c14a97,PodSandboxId:512ee3eeb792f8dbaddf11a5fcd68cb8fdab38d3bab0523f27f9851604d9d3e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647136256389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v2pp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21
38fb5e-9c16-4a25-85d3-3d84b361a1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 39288f10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e85003fcda80065ba08ce664b39389c139e522b6fa6d3d549aa1489480769ba,PodSandboxId:2e331650860759f26b5bfc40e8dd29b524d4d7e6a670b8968c91b07752fc587b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:
1712789646301595587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xj5nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bb1878-3e4b-4647-a3a7-cb327ccbd364,},Annotations:map[string]string{io.kubernetes.container.hash: e6089a25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c4592bdae762071d9e3194f77a18c18a4e9892287473579e8949b855399bb7,PodSandboxId:3c76004c49eee60d1bc73391f13acef54ae33d676fb055852d74b0e044507385,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789626759076112,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ade9b0541b33ae26f2058c883c3798,},Annotations:map[string]string{io.kubernetes.container.hash: 5f81a59e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08ba1e285082b3e8168a800dbcfdffb0730b5e9ae2f5ca7dd4a1e41cbe5d061,PodSandboxId:f51fc7c43757ebf9dc411563a65a86b34eba8ebc9c77cfe96624c6f261c56db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789626718096771,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc6ba0b7c555727afeeda8fec9bc199,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f384cebb9db6a30cc358c386a5336d6d9de64f99fc0ab767580c8cda15b52f2,PodSandboxId:ae9380b9c2fba691c02a84120e8c0b8c16e9329a3f93d2dfdd23a285f9dd72bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789626709682024,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6695da719563f5e9d31d5ac8cc82cbd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd5e113a3c19da2d6de252551db5e40ec3162ff53e7078636fb2903d568adbf,PodSandboxId:31dc2b0b704c001485223edc854b0f80661499793a947799fc2c13cd5cdee36b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789626639730707,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4f1cb7f324a695caba4a74fdffb456b9b22f56f2a3883880ec4686227e507,PodSandboxId:a12df2a5ab1a88cfc09ae4dc1bf2a27a1ef57e0dae98c6e07ecfd0292765950f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712789335198838226,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=caf59d0d-aadf-4eb0-919b-24d63e51024f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.741134162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b03c03bd-2473-43db-beb2-c8afc2fe50b2 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.741203227Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b03c03bd-2473-43db-beb2-c8afc2fe50b2 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.742417960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d6147c4-6891-47c9-bee1-e42d3ff9ebdd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.742960546Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790192742935348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d6147c4-6891-47c9-bee1-e42d3ff9ebdd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.744058096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ac39997-c478-4b00-b6f6-ab7d9112e434 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.744111964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ac39997-c478-4b00-b6f6-ab7d9112e434 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.744295675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20860098a53a6c38bbb6118735789916a226b29170ef73a5f59b788e3e789d62,PodSandboxId:49425f3f0f3f6b7a6e493aff156f5590f340a93172980c09d58f0508792c2d4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789647852063589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad8e533-69ca-4eb5-9595-e6808dc0ff1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9a77a63,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb30f5e43a4c16269f8f5d80af70f51e68db7156d39cda88be08c09fc0b9603,PodSandboxId:b4dfdda9ca2105236b568781ee16a193ab337538af7a4e04193548a16506b913,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647278456064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bvdp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8a326-77ef-469f-abf7-082ff8a44782,},Annotations:map[string]string{io.kubernetes.container.hash: 208fdcc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9f7f18b77ab56e9facff46ac6daf77efb0725a434223643e10a22781c14a97,PodSandboxId:512ee3eeb792f8dbaddf11a5fcd68cb8fdab38d3bab0523f27f9851604d9d3e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647136256389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v2pp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21
38fb5e-9c16-4a25-85d3-3d84b361a1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 39288f10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e85003fcda80065ba08ce664b39389c139e522b6fa6d3d549aa1489480769ba,PodSandboxId:2e331650860759f26b5bfc40e8dd29b524d4d7e6a670b8968c91b07752fc587b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:
1712789646301595587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xj5nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bb1878-3e4b-4647-a3a7-cb327ccbd364,},Annotations:map[string]string{io.kubernetes.container.hash: e6089a25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c4592bdae762071d9e3194f77a18c18a4e9892287473579e8949b855399bb7,PodSandboxId:3c76004c49eee60d1bc73391f13acef54ae33d676fb055852d74b0e044507385,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789626759076112,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ade9b0541b33ae26f2058c883c3798,},Annotations:map[string]string{io.kubernetes.container.hash: 5f81a59e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08ba1e285082b3e8168a800dbcfdffb0730b5e9ae2f5ca7dd4a1e41cbe5d061,PodSandboxId:f51fc7c43757ebf9dc411563a65a86b34eba8ebc9c77cfe96624c6f261c56db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789626718096771,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc6ba0b7c555727afeeda8fec9bc199,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f384cebb9db6a30cc358c386a5336d6d9de64f99fc0ab767580c8cda15b52f2,PodSandboxId:ae9380b9c2fba691c02a84120e8c0b8c16e9329a3f93d2dfdd23a285f9dd72bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789626709682024,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6695da719563f5e9d31d5ac8cc82cbd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd5e113a3c19da2d6de252551db5e40ec3162ff53e7078636fb2903d568adbf,PodSandboxId:31dc2b0b704c001485223edc854b0f80661499793a947799fc2c13cd5cdee36b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789626639730707,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4f1cb7f324a695caba4a74fdffb456b9b22f56f2a3883880ec4686227e507,PodSandboxId:a12df2a5ab1a88cfc09ae4dc1bf2a27a1ef57e0dae98c6e07ecfd0292765950f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712789335198838226,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ac39997-c478-4b00-b6f6-ab7d9112e434 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.787203257Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b900a63a-7ae8-4e77-b834-47742f95d4c6 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.787271197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b900a63a-7ae8-4e77-b834-47742f95d4c6 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.789017306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19debe19-09d3-474c-81f7-3f488cc96d5f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.789678468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790192789645754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19debe19-09d3-474c-81f7-3f488cc96d5f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.790478853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e4e2db0-2587-45a5-b5ee-1ce0fd4f2175 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.790626370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e4e2db0-2587-45a5-b5ee-1ce0fd4f2175 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.790994669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20860098a53a6c38bbb6118735789916a226b29170ef73a5f59b788e3e789d62,PodSandboxId:49425f3f0f3f6b7a6e493aff156f5590f340a93172980c09d58f0508792c2d4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789647852063589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad8e533-69ca-4eb5-9595-e6808dc0ff1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9a77a63,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb30f5e43a4c16269f8f5d80af70f51e68db7156d39cda88be08c09fc0b9603,PodSandboxId:b4dfdda9ca2105236b568781ee16a193ab337538af7a4e04193548a16506b913,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647278456064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bvdp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8a326-77ef-469f-abf7-082ff8a44782,},Annotations:map[string]string{io.kubernetes.container.hash: 208fdcc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9f7f18b77ab56e9facff46ac6daf77efb0725a434223643e10a22781c14a97,PodSandboxId:512ee3eeb792f8dbaddf11a5fcd68cb8fdab38d3bab0523f27f9851604d9d3e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647136256389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v2pp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21
38fb5e-9c16-4a25-85d3-3d84b361a1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 39288f10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e85003fcda80065ba08ce664b39389c139e522b6fa6d3d549aa1489480769ba,PodSandboxId:2e331650860759f26b5bfc40e8dd29b524d4d7e6a670b8968c91b07752fc587b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:
1712789646301595587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xj5nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bb1878-3e4b-4647-a3a7-cb327ccbd364,},Annotations:map[string]string{io.kubernetes.container.hash: e6089a25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c4592bdae762071d9e3194f77a18c18a4e9892287473579e8949b855399bb7,PodSandboxId:3c76004c49eee60d1bc73391f13acef54ae33d676fb055852d74b0e044507385,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789626759076112,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ade9b0541b33ae26f2058c883c3798,},Annotations:map[string]string{io.kubernetes.container.hash: 5f81a59e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08ba1e285082b3e8168a800dbcfdffb0730b5e9ae2f5ca7dd4a1e41cbe5d061,PodSandboxId:f51fc7c43757ebf9dc411563a65a86b34eba8ebc9c77cfe96624c6f261c56db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789626718096771,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc6ba0b7c555727afeeda8fec9bc199,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f384cebb9db6a30cc358c386a5336d6d9de64f99fc0ab767580c8cda15b52f2,PodSandboxId:ae9380b9c2fba691c02a84120e8c0b8c16e9329a3f93d2dfdd23a285f9dd72bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789626709682024,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6695da719563f5e9d31d5ac8cc82cbd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd5e113a3c19da2d6de252551db5e40ec3162ff53e7078636fb2903d568adbf,PodSandboxId:31dc2b0b704c001485223edc854b0f80661499793a947799fc2c13cd5cdee36b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789626639730707,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4f1cb7f324a695caba4a74fdffb456b9b22f56f2a3883880ec4686227e507,PodSandboxId:a12df2a5ab1a88cfc09ae4dc1bf2a27a1ef57e0dae98c6e07ecfd0292765950f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712789335198838226,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e4e2db0-2587-45a5-b5ee-1ce0fd4f2175 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.827013943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fb9db10-d9d9-4633-a3a8-9a52957bc7e1 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.827088344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fb9db10-d9d9-4633-a3a8-9a52957bc7e1 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.829209000Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e415b52-18dd-42a7-bfd8-b8f53f33733c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.829656263Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790192829633378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e415b52-18dd-42a7-bfd8-b8f53f33733c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.830617527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c497147-ff8b-4a0e-90b0-34981c5dca5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.830669813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c497147-ff8b-4a0e-90b0-34981c5dca5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:03:12 embed-certs-706500 crio[732]: time="2024-04-10 23:03:12.831126713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20860098a53a6c38bbb6118735789916a226b29170ef73a5f59b788e3e789d62,PodSandboxId:49425f3f0f3f6b7a6e493aff156f5590f340a93172980c09d58f0508792c2d4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789647852063589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad8e533-69ca-4eb5-9595-e6808dc0ff1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9a77a63,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb30f5e43a4c16269f8f5d80af70f51e68db7156d39cda88be08c09fc0b9603,PodSandboxId:b4dfdda9ca2105236b568781ee16a193ab337538af7a4e04193548a16506b913,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647278456064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bvdp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8a326-77ef-469f-abf7-082ff8a44782,},Annotations:map[string]string{io.kubernetes.container.hash: 208fdcc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9f7f18b77ab56e9facff46ac6daf77efb0725a434223643e10a22781c14a97,PodSandboxId:512ee3eeb792f8dbaddf11a5fcd68cb8fdab38d3bab0523f27f9851604d9d3e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647136256389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v2pp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21
38fb5e-9c16-4a25-85d3-3d84b361a1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 39288f10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e85003fcda80065ba08ce664b39389c139e522b6fa6d3d549aa1489480769ba,PodSandboxId:2e331650860759f26b5bfc40e8dd29b524d4d7e6a670b8968c91b07752fc587b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:
1712789646301595587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xj5nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bb1878-3e4b-4647-a3a7-cb327ccbd364,},Annotations:map[string]string{io.kubernetes.container.hash: e6089a25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c4592bdae762071d9e3194f77a18c18a4e9892287473579e8949b855399bb7,PodSandboxId:3c76004c49eee60d1bc73391f13acef54ae33d676fb055852d74b0e044507385,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789626759076112,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ade9b0541b33ae26f2058c883c3798,},Annotations:map[string]string{io.kubernetes.container.hash: 5f81a59e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08ba1e285082b3e8168a800dbcfdffb0730b5e9ae2f5ca7dd4a1e41cbe5d061,PodSandboxId:f51fc7c43757ebf9dc411563a65a86b34eba8ebc9c77cfe96624c6f261c56db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789626718096771,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc6ba0b7c555727afeeda8fec9bc199,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f384cebb9db6a30cc358c386a5336d6d9de64f99fc0ab767580c8cda15b52f2,PodSandboxId:ae9380b9c2fba691c02a84120e8c0b8c16e9329a3f93d2dfdd23a285f9dd72bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789626709682024,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6695da719563f5e9d31d5ac8cc82cbd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd5e113a3c19da2d6de252551db5e40ec3162ff53e7078636fb2903d568adbf,PodSandboxId:31dc2b0b704c001485223edc854b0f80661499793a947799fc2c13cd5cdee36b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789626639730707,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4f1cb7f324a695caba4a74fdffb456b9b22f56f2a3883880ec4686227e507,PodSandboxId:a12df2a5ab1a88cfc09ae4dc1bf2a27a1ef57e0dae98c6e07ecfd0292765950f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712789335198838226,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c497147-ff8b-4a0e-90b0-34981c5dca5f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	20860098a53a6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   49425f3f0f3f6       storage-provisioner
	acb30f5e43a4c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   b4dfdda9ca210       coredns-76f75df574-bvdp5
	5b9f7f18b77ab       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   512ee3eeb792f       coredns-76f75df574-v2pp5
	8e85003fcda80       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   2e33165086075       kube-proxy-xj5nq
	24c4592bdae76       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   3c76004c49eee       etcd-embed-certs-706500
	a08ba1e285082       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   f51fc7c43757e       kube-scheduler-embed-certs-706500
	5f384cebb9db6       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   ae9380b9c2fba       kube-controller-manager-embed-certs-706500
	4dd5e113a3c19       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   31dc2b0b704c0       kube-apiserver-embed-certs-706500
	bdb4f1cb7f324       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   14 minutes ago      Exited              kube-apiserver            1                   a12df2a5ab1a8       kube-apiserver-embed-certs-706500
	
	
	==> coredns [5b9f7f18b77ab56e9facff46ac6daf77efb0725a434223643e10a22781c14a97] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [acb30f5e43a4c16269f8f5d80af70f51e68db7156d39cda88be08c09fc0b9603] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-706500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-706500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=embed-certs-706500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T22_53_53_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:53:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-706500
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 23:03:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 22:59:20 +0000   Wed, 10 Apr 2024 22:53:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 22:59:20 +0000   Wed, 10 Apr 2024 22:53:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 22:59:20 +0000   Wed, 10 Apr 2024 22:53:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 22:59:20 +0000   Wed, 10 Apr 2024 22:54:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    embed-certs-706500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 65039216ea2f4d04bceb173695a31972
	  System UUID:                65039216-ea2f-4d04-bceb-173695a31972
	  Boot ID:                    50e06d99-b932-43cf-af18-fddcec0b4877
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-bvdp5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-76f75df574-v2pp5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-embed-certs-706500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-706500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-embed-certs-706500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-xj5nq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-embed-certs-706500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-9mrmz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node embed-certs-706500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node embed-certs-706500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node embed-certs-706500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node embed-certs-706500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node embed-certs-706500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node embed-certs-706500 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s                  kubelet          Node embed-certs-706500 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m10s                  kubelet          Node embed-certs-706500 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node embed-certs-706500 event: Registered Node embed-certs-706500 in Controller
	
	
	==> dmesg <==
	[  +0.054628] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043256] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.758871] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.697367] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.656672] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.610339] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.059412] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062250] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.191066] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.162016] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.349085] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.721413] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.066100] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.853437] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +5.636785] kauditd_printk_skb: 97 callbacks suppressed
	[Apr10 22:49] kauditd_printk_skb: 81 callbacks suppressed
	[Apr10 22:53] systemd-fstab-generator[3594]: Ignoring "noauto" option for root device
	[  +0.064419] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.253364] systemd-fstab-generator[3915]: Ignoring "noauto" option for root device
	[  +0.092526] kauditd_printk_skb: 54 callbacks suppressed
	[Apr10 22:54] systemd-fstab-generator[4127]: Ignoring "noauto" option for root device
	[  +0.104790] kauditd_printk_skb: 12 callbacks suppressed
	[Apr10 22:55] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [24c4592bdae762071d9e3194f77a18c18a4e9892287473579e8949b855399bb7] <==
	{"level":"info","ts":"2024-04-10T22:53:47.458047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e switched to configuration voters=(17911497232019635470)"}
	{"level":"info","ts":"2024-04-10T22:53:47.461075Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","added-peer-id":"f8926bd555ec3d0e","added-peer-peer-urls":["https://192.168.39.10:2380"]}
	{"level":"info","ts":"2024-04-10T22:53:47.489325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-10T22:53:47.489428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-10T22:53:47.489837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgPreVoteResp from f8926bd555ec3d0e at term 1"}
	{"level":"info","ts":"2024-04-10T22:53:47.489875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became candidate at term 2"}
	{"level":"info","ts":"2024-04-10T22:53:47.490052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2024-04-10T22:53:47.490085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 2"}
	{"level":"info","ts":"2024-04-10T22:53:47.490272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2024-04-10T22:53:47.493785Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:53:47.496928Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:embed-certs-706500 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-10T22:53:47.49943Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:53:47.499667Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:53:47.499881Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:53:47.501402Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-10T22:53:47.507537Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-04-10T22:53:47.507835Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-04-10T22:53:47.509277Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:53:47.5112Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f8926bd555ec3d0e","initial-advertise-peer-urls":["https://192.168.39.10:2380"],"listen-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-10T22:53:47.511396Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-10T22:53:47.5188Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:53:47.528386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-10T22:53:47.528452Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-10T22:53:47.548891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-10T22:53:47.582826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	
	
	==> kernel <==
	 23:03:13 up 14 min,  0 users,  load average: 0.03, 0.23, 0.22
	Linux embed-certs-706500 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4dd5e113a3c19da2d6de252551db5e40ec3162ff53e7078636fb2903d568adbf] <==
	I0410 22:57:08.457066       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 22:58:49.608390       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:58:49.609217       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0410 22:58:50.609836       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:58:50.609888       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 22:58:50.609897       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 22:58:50.609961       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:58:50.610075       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 22:58:50.611319       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 22:59:50.610481       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:59:50.610579       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 22:59:50.610589       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 22:59:50.611637       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:59:50.611760       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 22:59:50.611812       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:01:50.610839       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:01:50.611149       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:01:50.611184       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:01:50.612842       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:01:50.612975       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:01:50.613006       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [bdb4f1cb7f324a695caba4a74fdffb456b9b22f56f2a3883880ec4686227e507] <==
	W0410 22:53:42.020816       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.123449       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.221930       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.340786       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.376251       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.397993       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.406160       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.436759       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.457748       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.479143       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.563277       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.630921       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.752971       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.817542       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.865904       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.894937       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.042689       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.079658       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.089799       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.148970       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.253604       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.279669       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.298940       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.322175       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.360739       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5f384cebb9db6a30cc358c386a5336d6d9de64f99fc0ab767580c8cda15b52f2] <==
	I0410 22:57:37.237157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="139.252µs"
	E0410 22:58:04.812646       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:58:05.298775       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 22:58:34.817634       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:58:35.307096       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 22:59:04.823712       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:59:05.316120       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 22:59:34.829294       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:59:35.324897       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:00:04.836597       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:00:05.333920       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0410 23:00:13.236549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="330.374µs"
	I0410 23:00:25.240816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="144.765µs"
	E0410 23:00:34.842841       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:00:35.342186       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:01:04.852119       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:01:05.351296       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:01:34.857526       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:01:35.361236       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:02:04.863171       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:02:05.369895       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:02:34.869945       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:02:35.379962       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:03:04.876415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:03:05.389223       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8e85003fcda80065ba08ce664b39389c139e522b6fa6d3d549aa1489480769ba] <==
	I0410 22:54:06.777170       1 server_others.go:72] "Using iptables proxy"
	I0410 22:54:06.793660       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	I0410 22:54:06.865413       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 22:54:06.865440       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:54:06.865456       1 server_others.go:168] "Using iptables Proxier"
	I0410 22:54:06.868859       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:54:06.869088       1 server.go:865] "Version info" version="v1.29.3"
	I0410 22:54:06.869100       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:54:06.870247       1 config.go:188] "Starting service config controller"
	I0410 22:54:06.870268       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 22:54:06.870295       1 config.go:97] "Starting endpoint slice config controller"
	I0410 22:54:06.870299       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 22:54:06.870925       1 config.go:315] "Starting node config controller"
	I0410 22:54:06.870934       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 22:54:06.974552       1 shared_informer.go:318] Caches are synced for node config
	I0410 22:54:06.974580       1 shared_informer.go:318] Caches are synced for service config
	I0410 22:54:06.974606       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a08ba1e285082b3e8168a800dbcfdffb0730b5e9ae2f5ca7dd4a1e41cbe5d061] <==
	W0410 22:53:49.621113       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0410 22:53:49.621142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0410 22:53:50.435056       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0410 22:53:50.435085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0410 22:53:50.517034       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0410 22:53:50.519122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0410 22:53:50.519320       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0410 22:53:50.519443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0410 22:53:50.529574       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0410 22:53:50.529740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0410 22:53:50.568183       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0410 22:53:50.568455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0410 22:53:50.671143       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0410 22:53:50.671240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0410 22:53:50.675036       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0410 22:53:50.675140       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 22:53:50.751237       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0410 22:53:50.751968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0410 22:53:50.754817       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0410 22:53:50.754900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0410 22:53:50.814059       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0410 22:53:50.814115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0410 22:53:50.864163       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0410 22:53:50.864217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0410 22:53:53.807070       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 10 23:00:53 embed-certs-706500 kubelet[3922]: E0410 23:00:53.322630    3922 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 23:00:53 embed-certs-706500 kubelet[3922]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:00:53 embed-certs-706500 kubelet[3922]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:00:53 embed-certs-706500 kubelet[3922]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:00:53 embed-certs-706500 kubelet[3922]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:01:06 embed-certs-706500 kubelet[3922]: E0410 23:01:06.217485    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:01:17 embed-certs-706500 kubelet[3922]: E0410 23:01:17.220308    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:01:31 embed-certs-706500 kubelet[3922]: E0410 23:01:31.217728    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:01:44 embed-certs-706500 kubelet[3922]: E0410 23:01:44.217986    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:01:53 embed-certs-706500 kubelet[3922]: E0410 23:01:53.323184    3922 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 23:01:53 embed-certs-706500 kubelet[3922]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:01:53 embed-certs-706500 kubelet[3922]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:01:53 embed-certs-706500 kubelet[3922]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:01:53 embed-certs-706500 kubelet[3922]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:01:56 embed-certs-706500 kubelet[3922]: E0410 23:01:56.217405    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:02:08 embed-certs-706500 kubelet[3922]: E0410 23:02:08.218149    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:02:22 embed-certs-706500 kubelet[3922]: E0410 23:02:22.217689    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:02:37 embed-certs-706500 kubelet[3922]: E0410 23:02:37.218426    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:02:50 embed-certs-706500 kubelet[3922]: E0410 23:02:50.218766    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:02:53 embed-certs-706500 kubelet[3922]: E0410 23:02:53.324886    3922 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 23:02:53 embed-certs-706500 kubelet[3922]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:02:53 embed-certs-706500 kubelet[3922]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:02:53 embed-certs-706500 kubelet[3922]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:02:53 embed-certs-706500 kubelet[3922]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:03:04 embed-certs-706500 kubelet[3922]: E0410 23:03:04.217716    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	
	
	==> storage-provisioner [20860098a53a6c38bbb6118735789916a226b29170ef73a5f59b788e3e789d62] <==
	I0410 22:54:07.984260       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0410 22:54:08.004272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0410 22:54:08.004432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0410 22:54:08.019626       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0410 22:54:08.019781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-706500_ec7f311a-4e38-43a8-9919-a60191d3f5b0!
	I0410 22:54:08.022129       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74ecf70d-9945-4265-84d4-8d8cdc02049d", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-706500_ec7f311a-4e38-43a8-9919-a60191d3f5b0 became leader
	I0410 22:54:08.120458       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-706500_ec7f311a-4e38-43a8-9919-a60191d3f5b0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-706500 -n embed-certs-706500
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-706500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9mrmz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-706500 describe pod metrics-server-57f55c9bc5-9mrmz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-706500 describe pod metrics-server-57f55c9bc5-9mrmz: exit status 1 (64.666865ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9mrmz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-706500 describe pod metrics-server-57f55c9bc5-9mrmz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.37s)
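The ImagePullBackOff entries in the kubelet log above are expected for this profile: the audit log shows metrics-server was enabled with --registries=MetricsServer=fake.domain, so the image can never be pulled. A minimal sketch of how that state could be confirmed by hand, assuming the embed-certs-706500 kubeconfig context is still reachable and that the pod carries the usual k8s-app=metrics-server label (both assumptions, not verified by this report):
	kubectl --context embed-certs-706500 -n kube-system get pods -l k8s-app=metrics-server
	# Show why the pull fails; the events reference fake.domain/registry.k8s.io/echoserver:1.4 as in the log above.
	kubectl --context embed-certs-706500 -n kube-system describe pods -l k8s-app=metrics-server | grep -A5 Events:
	# Confirm which image the deployment actually references.
	kubectl --context embed-certs-706500 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'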

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-646133 -n no-preload-646133
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-10 23:04:07.719530112 +0000 UTC m=+5769.955959394
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
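The wait that failed above can be reproduced by hand: the harness polls for up to 9m0s for a pod labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. A sketch of the equivalent manual check, assuming the no-preload-646133 kubeconfig context is available locally (the exact polling logic lives in start_stop_delete_test.go and helpers_test.go, not in these commands):
	kubectl --context no-preload-646133 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Block until a matching pod is Ready, or give up after the same 9m the test allows.
	kubectl --context no-preload-646133 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
	# If nothing matches at all, check whether the dashboard addon deployed anything in that namespace.
	kubectl --context no-preload-646133 -n kubernetes-dashboard get deploy,rs,pods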
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646133 -n no-preload-646133
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-646133 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-646133 logs -n 25: (2.248264302s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-646133             | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:41 UTC |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:42 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-706500            | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC | 10 Apr 24 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862528        | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-646133                  | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-464519                              | cert-expiration-464519       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-676292 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	|         | disable-driver-mounts-676292                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862528             | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-519831  | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-706500                 | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:54 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-519831       | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC | 10 Apr 24 22:53 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:46:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:46:47.395706   58701 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:46:47.395991   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396002   58701 out.go:304] Setting ErrFile to fd 2...
	I0410 22:46:47.396019   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396208   58701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:46:47.396802   58701 out.go:298] Setting JSON to false
	I0410 22:46:47.397726   58701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5350,"bootTime":1712783858,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:46:47.397786   58701 start.go:139] virtualization: kvm guest
	I0410 22:46:47.400191   58701 out.go:177] * [default-k8s-diff-port-519831] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:46:47.401578   58701 notify.go:220] Checking for updates...
	I0410 22:46:47.402880   58701 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:46:47.404311   58701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:46:47.405790   58701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:46:47.407012   58701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:46:47.408130   58701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:46:47.409497   58701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:46:47.411183   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:46:47.411591   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.411632   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.426322   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0410 22:46:47.426759   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.427345   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.427366   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.427716   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.427926   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.428221   58701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:46:47.428646   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.428696   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.444105   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0410 22:46:47.444537   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.445035   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.445058   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.445398   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.445592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.480451   58701 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:46:47.481837   58701 start.go:297] selected driver: kvm2
	I0410 22:46:47.481852   58701 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.481985   58701 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:46:47.482657   58701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.482750   58701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:46:47.498330   58701 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:46:47.498668   58701 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:46:47.498735   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:46:47.498748   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:46:47.498784   58701 start.go:340] cluster config:
	{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.498877   58701 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.500723   58701 out.go:177] * Starting "default-k8s-diff-port-519831" primary control-plane node in "default-k8s-diff-port-519831" cluster
	I0410 22:46:47.180678   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:47.501967   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:46:47.502009   58701 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 22:46:47.502030   58701 cache.go:56] Caching tarball of preloaded images
	I0410 22:46:47.502108   58701 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:46:47.502118   58701 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 22:46:47.502202   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:46:47.502366   58701 start.go:360] acquireMachinesLock for default-k8s-diff-port-519831: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:46:50.252732   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:56.332647   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:59.404660   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:05.484717   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:08.556632   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:14.636753   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:17.708788   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:23.788661   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:26.860683   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:32.940630   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:36.012689   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:42.092749   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:45.164706   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:51.244682   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:54.316652   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:00.396637   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:03.468672   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:06.472768   57719 start.go:364] duration metric: took 4m5.937893783s to acquireMachinesLock for "old-k8s-version-862528"
	I0410 22:48:06.472833   57719 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:06.472852   57719 fix.go:54] fixHost starting: 
	I0410 22:48:06.473157   57719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:06.473186   57719 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:06.488728   57719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0410 22:48:06.489157   57719 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:06.489590   57719 main.go:141] libmachine: Using API Version  1
	I0410 22:48:06.489612   57719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:06.490011   57719 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:06.490171   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:06.490337   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetState
	I0410 22:48:06.491997   57719 fix.go:112] recreateIfNeeded on old-k8s-version-862528: state=Stopped err=<nil>
	I0410 22:48:06.492030   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	W0410 22:48:06.492234   57719 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:06.493891   57719 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862528" ...
	I0410 22:48:06.469869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:06.469904   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470235   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:48:06.470261   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470529   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:48:06.472589   57270 machine.go:97] duration metric: took 4m35.561692081s to provisionDockerMachine
	I0410 22:48:06.472636   57270 fix.go:56] duration metric: took 4m35.586484815s for fixHost
	I0410 22:48:06.472646   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 4m35.586540892s
	W0410 22:48:06.472671   57270 start.go:713] error starting host: provision: host is not running
	W0410 22:48:06.472773   57270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0410 22:48:06.472785   57270 start.go:728] Will try again in 5 seconds ...
	I0410 22:48:06.495233   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .Start
	I0410 22:48:06.495416   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring networks are active...
	I0410 22:48:06.496254   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network default is active
	I0410 22:48:06.496589   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network mk-old-k8s-version-862528 is active
	I0410 22:48:06.497002   57719 main.go:141] libmachine: (old-k8s-version-862528) Getting domain xml...
	I0410 22:48:06.497751   57719 main.go:141] libmachine: (old-k8s-version-862528) Creating domain...
	I0410 22:48:07.722703   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting to get IP...
	I0410 22:48:07.723942   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:07.724373   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:07.724451   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:07.724338   59021 retry.go:31] will retry after 284.455366ms: waiting for machine to come up
	I0410 22:48:08.011077   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.011598   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.011628   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.011545   59021 retry.go:31] will retry after 337.946102ms: waiting for machine to come up
	I0410 22:48:08.351219   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.351725   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.351744   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.351691   59021 retry.go:31] will retry after 454.774669ms: waiting for machine to come up
	I0410 22:48:08.808516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.808953   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.808991   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.808893   59021 retry.go:31] will retry after 484.667282ms: waiting for machine to come up
	I0410 22:48:09.295665   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.296127   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.296148   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.296083   59021 retry.go:31] will retry after 515.00238ms: waiting for machine to come up
	I0410 22:48:09.812855   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.813337   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.813362   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.813289   59021 retry.go:31] will retry after 596.67118ms: waiting for machine to come up
	I0410 22:48:10.411103   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:10.411616   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:10.411640   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:10.411568   59021 retry.go:31] will retry after 1.035822512s: waiting for machine to come up
	I0410 22:48:11.473748   57270 start.go:360] acquireMachinesLock for no-preload-646133: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:48:11.448894   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:11.449358   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:11.449388   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:11.449315   59021 retry.go:31] will retry after 1.258446774s: waiting for machine to come up
	I0410 22:48:12.709048   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:12.709587   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:12.709618   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:12.709530   59021 retry.go:31] will retry after 1.149380432s: waiting for machine to come up
	I0410 22:48:13.860550   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:13.861084   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:13.861110   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:13.861028   59021 retry.go:31] will retry after 1.733388735s: waiting for machine to come up
	I0410 22:48:15.595870   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:15.596447   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:15.596487   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:15.596343   59021 retry.go:31] will retry after 2.536794123s: waiting for machine to come up
	I0410 22:48:18.135592   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:18.136099   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:18.136128   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:18.136056   59021 retry.go:31] will retry after 3.390395523s: waiting for machine to come up
	I0410 22:48:21.528518   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:21.528964   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:21.529008   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:21.528906   59021 retry.go:31] will retry after 4.165145769s: waiting for machine to come up
	I0410 22:48:26.977460   58186 start.go:364] duration metric: took 3m29.815175662s to acquireMachinesLock for "embed-certs-706500"
	I0410 22:48:26.977524   58186 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:26.977532   58186 fix.go:54] fixHost starting: 
	I0410 22:48:26.977935   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:26.977965   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:26.994175   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0410 22:48:26.994552   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:26.995016   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:48:26.995040   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:26.995447   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:26.995652   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:26.995826   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:48:26.997547   58186 fix.go:112] recreateIfNeeded on embed-certs-706500: state=Stopped err=<nil>
	I0410 22:48:26.997580   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	W0410 22:48:26.997902   58186 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:27.000500   58186 out.go:177] * Restarting existing kvm2 VM for "embed-certs-706500" ...
	I0410 22:48:27.002204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Start
	I0410 22:48:27.002398   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring networks are active...
	I0410 22:48:27.003133   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network default is active
	I0410 22:48:27.003465   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network mk-embed-certs-706500 is active
	I0410 22:48:27.003863   58186 main.go:141] libmachine: (embed-certs-706500) Getting domain xml...
	I0410 22:48:27.004603   58186 main.go:141] libmachine: (embed-certs-706500) Creating domain...
	I0410 22:48:25.699595   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700129   57719 main.go:141] libmachine: (old-k8s-version-862528) Found IP for machine: 192.168.61.178
	I0410 22:48:25.700159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has current primary IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700166   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserving static IP address...
	I0410 22:48:25.700654   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserved static IP address: 192.168.61.178
	I0410 22:48:25.700676   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting for SSH to be available...
	I0410 22:48:25.700704   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.700732   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | skip adding static IP to network mk-old-k8s-version-862528 - found existing host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"}
	I0410 22:48:25.700745   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Getting to WaitForSSH function...
	I0410 22:48:25.702929   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703290   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.703322   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703490   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH client type: external
	I0410 22:48:25.703519   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa (-rw-------)
	I0410 22:48:25.703551   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:25.703590   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | About to run SSH command:
	I0410 22:48:25.703635   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | exit 0
	I0410 22:48:25.832738   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | SSH cmd err, output: <nil>: 
	I0410 22:48:25.833133   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetConfigRaw
	I0410 22:48:25.833784   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:25.836323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.836874   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.836908   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.837156   57719 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json ...
	I0410 22:48:25.837472   57719 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:25.837502   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:25.837710   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.840159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840488   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.840516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840593   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.840815   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.840992   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.841134   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.841337   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.841543   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.841556   57719 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:25.957153   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:25.957189   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957438   57719 buildroot.go:166] provisioning hostname "old-k8s-version-862528"
	I0410 22:48:25.957461   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.960779   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961149   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.961184   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961332   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.961546   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961689   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961864   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.962020   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.962196   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.962207   57719 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862528 && echo "old-k8s-version-862528" | sudo tee /etc/hostname
	I0410 22:48:26.087073   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862528
	
	I0410 22:48:26.087099   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.089770   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090109   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.090140   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090261   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.090446   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090623   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090760   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.090951   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.091131   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.091155   57719 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:26.214422   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:26.214462   57719 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:26.214490   57719 buildroot.go:174] setting up certificates
	I0410 22:48:26.214498   57719 provision.go:84] configureAuth start
	I0410 22:48:26.214509   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:26.214793   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.217463   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217809   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.217850   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217975   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.219971   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220235   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.220265   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220480   57719 provision.go:143] copyHostCerts
	I0410 22:48:26.220526   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:26.220542   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:26.220604   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:26.220703   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:26.220712   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:26.220736   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:26.220789   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:26.220796   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:26.220817   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:26.220864   57719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862528 san=[127.0.0.1 192.168.61.178 localhost minikube old-k8s-version-862528]
	I0410 22:48:26.288372   57719 provision.go:177] copyRemoteCerts
	I0410 22:48:26.288445   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:26.288468   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.290980   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291298   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.291339   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291444   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.291635   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.291809   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.291927   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.379823   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:26.405285   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0410 22:48:26.430122   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:26.456124   57719 provision.go:87] duration metric: took 241.614364ms to configureAuth
	I0410 22:48:26.456154   57719 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:26.456356   57719 config.go:182] Loaded profile config "old-k8s-version-862528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:48:26.456480   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.459028   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459335   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.459366   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.459742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.459888   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.460037   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.460211   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.460379   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.460413   57719 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:26.732588   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:26.732614   57719 machine.go:97] duration metric: took 895.122467ms to provisionDockerMachine
	I0410 22:48:26.732627   57719 start.go:293] postStartSetup for "old-k8s-version-862528" (driver="kvm2")
	I0410 22:48:26.732641   57719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:26.732679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.733014   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:26.733044   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.735820   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736217   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.736244   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736418   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.736630   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.736840   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.737020   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.823452   57719 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:26.827806   57719 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:26.827827   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:26.827899   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:26.828009   57719 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:26.828122   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:26.837564   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:26.862278   57719 start.go:296] duration metric: took 129.638185ms for postStartSetup
	I0410 22:48:26.862325   57719 fix.go:56] duration metric: took 20.389482643s for fixHost
	I0410 22:48:26.862346   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.864911   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865277   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.865301   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865419   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.865597   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865872   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.866083   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.866283   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.866300   57719 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:26.977317   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789306.948982315
	
	I0410 22:48:26.977337   57719 fix.go:216] guest clock: 1712789306.948982315
	I0410 22:48:26.977344   57719 fix.go:229] Guest: 2024-04-10 22:48:26.948982315 +0000 UTC Remote: 2024-04-10 22:48:26.862329953 +0000 UTC m=+266.486936912 (delta=86.652362ms)
	I0410 22:48:26.977362   57719 fix.go:200] guest clock delta is within tolerance: 86.652362ms
	I0410 22:48:26.977366   57719 start.go:83] releasing machines lock for "old-k8s-version-862528", held for 20.504554043s
	I0410 22:48:26.977386   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.977653   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.980035   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980376   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.980419   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980602   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981224   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981421   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981516   57719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:26.981558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.981645   57719 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:26.981670   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.984375   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984568   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984840   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.984868   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984953   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985030   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.985079   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.985118   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985236   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985277   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985374   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985450   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.985516   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985635   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:27.105002   57719 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:27.111205   57719 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:27.261678   57719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:27.268336   57719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:27.268423   57719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:27.290099   57719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:27.290122   57719 start.go:494] detecting cgroup driver to use...
	I0410 22:48:27.290174   57719 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:27.308787   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:27.325557   57719 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:27.325611   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:27.340859   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:27.355570   57719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:27.479670   57719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:27.653364   57719 docker.go:233] disabling docker service ...
	I0410 22:48:27.653424   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:27.669775   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:27.683654   57719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:27.813212   57719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:27.929620   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:27.946085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:27.966341   57719 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0410 22:48:27.966404   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.978022   57719 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:27.978111   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.989324   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.001429   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.012965   57719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:28.024663   57719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:28.034362   57719 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:28.034423   57719 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:28.048740   57719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:48:28.060698   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:28.188526   57719 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:28.348442   57719 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:28.348523   57719 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:28.353501   57719 start.go:562] Will wait 60s for crictl version
	I0410 22:48:28.353566   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:28.357486   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:28.391138   57719 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:28.391221   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.421399   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.455851   57719 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0410 22:48:28.457534   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:28.460913   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461297   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:28.461323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461558   57719 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:28.466450   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:28.480549   57719 kubeadm.go:877] updating cluster {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:28.480671   57719 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:48:28.480775   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:28.536971   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:28.537034   57719 ssh_runner.go:195] Run: which lz4
	I0410 22:48:28.541757   57719 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:48:28.546381   57719 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:28.546413   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0410 22:48:30.411805   57719 crio.go:462] duration metric: took 1.870076139s to copy over tarball
	I0410 22:48:30.411900   57719 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:28.229217   58186 main.go:141] libmachine: (embed-certs-706500) Waiting to get IP...
	I0410 22:48:28.230257   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.230673   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.230724   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.230643   59155 retry.go:31] will retry after 262.296498ms: waiting for machine to come up
	I0410 22:48:28.494117   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.494631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.494660   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.494584   59155 retry.go:31] will retry after 237.287095ms: waiting for machine to come up
	I0410 22:48:28.733250   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.733795   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.733817   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.733755   59155 retry.go:31] will retry after 387.436239ms: waiting for machine to come up
	I0410 22:48:29.123585   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.124128   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.124163   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.124073   59155 retry.go:31] will retry after 428.418916ms: waiting for machine to come up
	I0410 22:48:29.554781   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.555244   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.555285   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.555235   59155 retry.go:31] will retry after 683.194159ms: waiting for machine to come up
	I0410 22:48:30.239955   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:30.240385   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:30.240463   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:30.240365   59155 retry.go:31] will retry after 764.240086ms: waiting for machine to come up
	I0410 22:48:31.006294   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:31.006789   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:31.006816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:31.006750   59155 retry.go:31] will retry after 1.113674235s: waiting for machine to come up
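The "will retry after ..." lines above come from a generic backoff helper (retry.go) polling libvirt for the domain's DHCP lease. A rough, illustrative sketch of that pattern follows; the helper name and timings are assumptions, not minikube's actual retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff polls fn until it succeeds or attempts run out, sleeping a
// jittered, growing delay between tries, like the "will retry after ..." lines.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	start := time.Now()
	_ = retryWithBackoff(8, 250*time.Millisecond, func() error {
		// Stand-in for "look up the domain's current IP in the DHCP leases".
		if time.Since(start) < time.Second {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
}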
	I0410 22:48:33.358026   57719 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946092727s)
	I0410 22:48:33.358059   57719 crio.go:469] duration metric: took 2.946222933s to extract the tarball
	I0410 22:48:33.358069   57719 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:33.402924   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:33.441006   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:33.441033   57719 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:48:33.441090   57719 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.441142   57719 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.441203   57719 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.441210   57719 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.441318   57719 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0410 22:48:33.441339   57719 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.441375   57719 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.441395   57719 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442645   57719 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.442667   57719 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.442706   57719 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.442717   57719 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0410 22:48:33.442796   57719 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.442807   57719 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442814   57719 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.442866   57719 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.651119   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.652634   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.665548   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.669396   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.672510   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.674137   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0410 22:48:33.686915   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.756592   57719 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0410 22:48:33.756639   57719 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.756696   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.756696   57719 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0410 22:48:33.756789   57719 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.756810   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867043   57719 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0410 22:48:33.867061   57719 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0410 22:48:33.867090   57719 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.867091   57719 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.867135   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867166   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867185   57719 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0410 22:48:33.867220   57719 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.867252   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867261   57719 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0410 22:48:33.867303   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.867311   57719 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0410 22:48:33.867355   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867359   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.867286   57719 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0410 22:48:33.867452   57719 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.867481   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.871719   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.881086   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.964827   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.964854   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0410 22:48:33.964932   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0410 22:48:33.964948   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.976084   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0410 22:48:33.976155   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0410 22:48:33.976205   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0410 22:48:34.011460   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0410 22:48:34.289751   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:34.429542   57719 cache_images.go:92] duration metric: took 988.487885ms to LoadCachedImages
	W0410 22:48:34.429636   57719 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0410 22:48:34.429665   57719 kubeadm.go:928] updating node { 192.168.61.178 8443 v1.20.0 crio true true} ...
	I0410 22:48:34.429782   57719 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:34.429870   57719 ssh_runner.go:195] Run: crio config
	I0410 22:48:34.478794   57719 cni.go:84] Creating CNI manager for ""
	I0410 22:48:34.478829   57719 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:34.478845   57719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:34.478868   57719 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862528 NodeName:old-k8s-version-862528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0410 22:48:34.479065   57719 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862528"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:34.479147   57719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0410 22:48:34.489950   57719 binaries.go:44] Found k8s binaries, skipping transfer
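The kubeadm config printed above is rendered per profile from the cluster parameters (node IP, Kubernetes version, pod and service CIDRs). A minimal sketch of that kind of templating in Go, assuming a hypothetical cut-down template rather than the one minikube actually ships:

package main

import (
	"os"
	"text/template"
)

// A stand-in for the kubeadm config generation step: render the fields that
// vary per profile from a template. The template text and field names here
// are illustrative only.
var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`))

func main() {
	params := struct {
		AdvertiseAddress, ControlPlaneEndpoint, KubernetesVersion string
		DNSDomain, PodSubnet, ServiceCIDR                         string
		APIServerPort                                             int
	}{
		AdvertiseAddress:     "192.168.61.178",
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		KubernetesVersion:    "v1.20.0",
		DNSDomain:            "cluster.local",
		PodSubnet:            "10.244.0.0/16",
		ServiceCIDR:          "10.96.0.0/12",
		APIServerPort:        8443,
	}
	_ = kubeadmTmpl.Execute(os.Stdout, params)
}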
	I0410 22:48:34.490007   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:34.500261   57719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0410 22:48:34.517530   57719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:34.534814   57719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0410 22:48:34.552669   57719 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:34.556612   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
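The bash one-liner above rewrites /etc/hosts so there is exactly one entry mapping control-plane.minikube.internal to the node IP. The same idea expressed in Go, operating on an arbitrary file path so it can be tried locally; ensureHostsEntry is a hypothetical helper, not minikube code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<host>" and appends a
// single fresh "<ip>\t<host>" mapping, mirroring the grep -v / echo pipeline.
func ensureHostsEntry(path, ip, host string) error {
	data, _ := os.ReadFile(path) // a missing file just means "no lines yet"
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.sample", "192.168.61.178", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}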
	I0410 22:48:34.569643   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:34.700791   57719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:34.719682   57719 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528 for IP: 192.168.61.178
	I0410 22:48:34.719703   57719 certs.go:194] generating shared ca certs ...
	I0410 22:48:34.719722   57719 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:34.719900   57719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:34.719951   57719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:34.719965   57719 certs.go:256] generating profile certs ...
	I0410 22:48:34.720091   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.key
	I0410 22:48:34.720155   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c
	I0410 22:48:34.720199   57719 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key
	I0410 22:48:34.720337   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:34.720376   57719 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:34.720386   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:34.720438   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:34.720472   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:34.720502   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:34.720557   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:34.721238   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:34.769810   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:34.805397   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:34.846743   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:34.888720   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0410 22:48:34.915958   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:48:34.962182   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:34.992444   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:35.023525   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:35.051098   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:35.077305   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:35.102172   57719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:35.121381   57719 ssh_runner.go:195] Run: openssl version
	I0410 22:48:35.127869   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:35.140056   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145172   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145242   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.152081   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:48:35.164621   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:35.176511   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182164   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182217   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.188968   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:35.201491   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:35.213468   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218519   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218586   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.224872   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:35.236964   57719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:35.242262   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:35.249245   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:35.256301   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:35.263359   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:35.270166   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:35.276953   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
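Each `openssl x509 -noout -in CERT -checkend 86400` call above asks whether the certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509, shown only for illustration; minikube shells out to openssl on the guest rather than doing this in-process.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the
// given window, i.e. the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range os.Args[1:] {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}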
	I0410 22:48:35.283529   57719 kubeadm.go:391] StartCluster: {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:35.283643   57719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:35.283700   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.328461   57719 cri.go:89] found id: ""
	I0410 22:48:35.328532   57719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:35.340207   57719 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:35.340235   57719 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:35.340245   57719 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:35.340293   57719 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:35.351212   57719 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:35.352189   57719 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:48:35.352989   57719 kubeconfig.go:62] /home/jenkins/minikube-integration/18610-5679/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862528" cluster setting kubeconfig missing "old-k8s-version-862528" context setting]
	I0410 22:48:35.353956   57719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:32.122313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:32.122773   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:32.122816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:32.122717   59155 retry.go:31] will retry after 1.052378413s: waiting for machine to come up
	I0410 22:48:33.176207   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:33.176621   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:33.176665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:33.176568   59155 retry.go:31] will retry after 1.548572633s: waiting for machine to come up
	I0410 22:48:34.726554   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:34.726992   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:34.727020   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:34.726938   59155 retry.go:31] will retry after 1.800911659s: waiting for machine to come up
	I0410 22:48:36.529629   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:36.530133   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:36.530164   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:36.530085   59155 retry.go:31] will retry after 2.434743044s: waiting for machine to come up
	I0410 22:48:35.428830   57719 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:35.479813   57719 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.178
	I0410 22:48:35.479853   57719 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:35.479882   57719 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:35.479940   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.520506   57719 cri.go:89] found id: ""
	I0410 22:48:35.520577   57719 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:35.538167   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:35.548571   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:35.548600   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:35.548662   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:35.558559   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:35.558612   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:35.568950   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:35.578644   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:35.578712   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:35.589075   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.600265   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:35.600321   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.611459   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:35.621712   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:35.621785   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
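The grep/rm sequence above checks each existing kubeconfig-style file for the expected control-plane endpoint and removes any file that does not reference it, so the following `kubeadm init phase kubeconfig` run regenerates them. A compact sketch of that logic; pruneStaleConfigs is a hypothetical helper name.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleConfigs keeps a file only if it already points at the expected
// control-plane endpoint; otherwise it is removed so kubeadm recreates it.
func pruneStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // ignore "No such file or directory", as the log does
		}
	}
}

func main() {
	pruneStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}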
	I0410 22:48:35.632133   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:35.643494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:35.775309   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.133286   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.35793645s)
	I0410 22:48:37.133334   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.368687   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.497136   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.584652   57719 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:37.584744   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.085293   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.585489   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.584951   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:40.085144   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.966866   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:38.967360   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:38.967383   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:38.967339   59155 retry.go:31] will retry after 3.219302627s: waiting for machine to come up
	I0410 22:48:40.585356   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.084839   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.585434   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.085797   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.585578   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.085621   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.585581   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.584785   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:45.085394   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
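The repeated `pgrep -xnf kube-apiserver.*minikube.*` runs above form a poll loop: check for the apiserver process roughly every 500ms until it appears or a deadline passes. An illustrative local version of that loop, assuming pgrep is available on the machine running it:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists or the
// timeout expires; pgrep exits 0 only when at least one process matched.
func waitForProcess(pattern string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-f", pattern).Run(); err == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println("apiserver up:", waitForProcess("kube-apiserver.*minikube.*", 30*time.Second))
}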
	I0410 22:48:46.409467   58701 start.go:364] duration metric: took 1m58.907071516s to acquireMachinesLock for "default-k8s-diff-port-519831"
	I0410 22:48:46.409536   58701 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:46.409557   58701 fix.go:54] fixHost starting: 
	I0410 22:48:46.410030   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:46.410080   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:46.427877   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I0410 22:48:46.428357   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:46.428836   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:48:46.428858   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:46.429163   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:46.429354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:48:46.429494   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:48:46.431151   58701 fix.go:112] recreateIfNeeded on default-k8s-diff-port-519831: state=Stopped err=<nil>
	I0410 22:48:46.431192   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	W0410 22:48:46.431372   58701 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:46.433597   58701 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-519831" ...
	I0410 22:48:42.187835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:42.188266   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:42.188305   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:42.188191   59155 retry.go:31] will retry after 2.924293511s: waiting for machine to come up
	I0410 22:48:45.113669   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114211   58186 main.go:141] libmachine: (embed-certs-706500) Found IP for machine: 192.168.39.10
	I0410 22:48:45.114229   58186 main.go:141] libmachine: (embed-certs-706500) Reserving static IP address...
	I0410 22:48:45.114243   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has current primary IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114685   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.114711   58186 main.go:141] libmachine: (embed-certs-706500) DBG | skip adding static IP to network mk-embed-certs-706500 - found existing host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"}
	I0410 22:48:45.114721   58186 main.go:141] libmachine: (embed-certs-706500) Reserved static IP address: 192.168.39.10
	I0410 22:48:45.114728   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Getting to WaitForSSH function...
	I0410 22:48:45.114743   58186 main.go:141] libmachine: (embed-certs-706500) Waiting for SSH to be available...
	I0410 22:48:45.116708   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.116963   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.117007   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.117139   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH client type: external
	I0410 22:48:45.117167   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa (-rw-------)
	I0410 22:48:45.117198   58186 main.go:141] libmachine: (embed-certs-706500) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:45.117224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | About to run SSH command:
	I0410 22:48:45.117236   58186 main.go:141] libmachine: (embed-certs-706500) DBG | exit 0
	I0410 22:48:45.240518   58186 main.go:141] libmachine: (embed-certs-706500) DBG | SSH cmd err, output: <nil>: 
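WaitForSSH above keeps probing the guest with an external ssh client (running `exit 0`) until a command succeeds. A lighter-weight stand-in, shown only as a sketch, is to poll the TCP port until it accepts connections; the address below is the one this run resolved.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort dials the address repeatedly until a TCP connection succeeds or
// the timeout is reached, a crude proxy for "SSH is available".
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForPort("192.168.39.10:22", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH port is reachable")
}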
	I0410 22:48:45.240843   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetConfigRaw
	I0410 22:48:45.241532   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.243908   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244293   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.244317   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244576   58186 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/config.json ...
	I0410 22:48:45.244775   58186 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:45.244799   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:45.245004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.247248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247639   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.247665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247859   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.248039   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248217   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248375   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.248543   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.248746   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.248766   58186 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:45.357146   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:45.357177   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357428   58186 buildroot.go:166] provisioning hostname "embed-certs-706500"
	I0410 22:48:45.357447   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357624   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.360299   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360700   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.360796   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360838   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.361049   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361183   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361367   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.361537   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.361702   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.361716   58186 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-706500 && echo "embed-certs-706500" | sudo tee /etc/hostname
	I0410 22:48:45.487121   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-706500
	
	I0410 22:48:45.487160   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.490242   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490597   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.490625   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490805   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.491004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491359   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.491576   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.491792   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.491824   58186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:45.606186   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:45.606212   58186 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:45.606246   58186 buildroot.go:174] setting up certificates
	I0410 22:48:45.606257   58186 provision.go:84] configureAuth start
	I0410 22:48:45.606269   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.606594   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.609459   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.609893   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.609932   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.610134   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.612631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.612945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.612979   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.613144   58186 provision.go:143] copyHostCerts
	I0410 22:48:45.613193   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:45.613207   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:45.613262   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:45.613378   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:45.613393   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:45.613427   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:45.613495   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:45.613505   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:45.613529   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:45.613592   58186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.embed-certs-706500 san=[127.0.0.1 192.168.39.10 embed-certs-706500 localhost minikube]
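The "generating server cert" step above issues a server certificate whose SANs cover the guest's IP addresses and hostnames. A simplified crypto/x509 sketch follows; it self-signs for brevity, whereas the real flow signs with the machine CA key pair listed in the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-706500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-706500", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.10")},
	}
	// Self-signed here; the real provisioner uses the CA cert/key as the parent.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}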
	I0410 22:48:45.737049   58186 provision.go:177] copyRemoteCerts
	I0410 22:48:45.737105   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:45.737129   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.739712   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740060   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.740089   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740347   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.740589   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.740763   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.740957   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:45.828677   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:45.854080   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:48:45.878704   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:45.902611   58186 provision.go:87] duration metric: took 296.343353ms to configureAuth
	I0410 22:48:45.902640   58186 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:45.902879   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:48:45.902962   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.905588   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.905950   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.905972   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.906165   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.906360   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906473   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906561   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.906725   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.906887   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.906911   58186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:46.172772   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:46.172807   58186 machine.go:97] duration metric: took 928.014662ms to provisionDockerMachine
	I0410 22:48:46.172823   58186 start.go:293] postStartSetup for "embed-certs-706500" (driver="kvm2")
	I0410 22:48:46.172836   58186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:46.172877   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.173197   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:46.173223   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.176113   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.176495   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176679   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.176896   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.177118   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.177328   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.260470   58186 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:46.265003   58186 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:46.265030   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:46.265088   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:46.265158   58186 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:46.265241   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:46.274931   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:46.300036   58186 start.go:296] duration metric: took 127.199834ms for postStartSetup
	I0410 22:48:46.300082   58186 fix.go:56] duration metric: took 19.322550114s for fixHost
	I0410 22:48:46.300108   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.302945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303252   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.303279   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303479   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.303700   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303861   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303990   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.304140   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:46.304308   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:46.304318   58186 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:46.409294   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789326.385898055
	
	I0410 22:48:46.409317   58186 fix.go:216] guest clock: 1712789326.385898055
	I0410 22:48:46.409327   58186 fix.go:229] Guest: 2024-04-10 22:48:46.385898055 +0000 UTC Remote: 2024-04-10 22:48:46.300087658 +0000 UTC m=+229.287947250 (delta=85.810397ms)
	I0410 22:48:46.409352   58186 fix.go:200] guest clock delta is within tolerance: 85.810397ms
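For reference, the delta logged above is just guest minus remote: 22:48:46.385898055 - 22:48:46.300087658 = 0.085810397 s = 85.810397 ms, which is inside minikube's allowed drift, so the guest clock is left untouched.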
	I0410 22:48:46.409360   58186 start.go:83] releasing machines lock for "embed-certs-706500", held for 19.431860062s
	I0410 22:48:46.409389   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.409752   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:46.412201   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412616   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.412651   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412790   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413361   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413559   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413617   58186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:46.413665   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.413796   58186 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:46.413831   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.416879   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417268   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417428   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.417630   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.417811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.417835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417858   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417938   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.418030   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.418154   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.418284   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.418463   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.529204   58186 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:46.535396   58186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:46.681100   58186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:46.687278   58186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:46.687340   58186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:46.703105   58186 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:46.703128   58186 start.go:494] detecting cgroup driver to use...
	I0410 22:48:46.703191   58186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:46.719207   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:46.733444   58186 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:46.733509   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:46.747369   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:46.762231   58186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:46.874897   58186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:47.023672   58186 docker.go:233] disabling docker service ...
	I0410 22:48:47.023749   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:47.038963   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:47.053827   58186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:46.435268   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Start
	I0410 22:48:46.435498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring networks are active...
	I0410 22:48:46.436266   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network default is active
	I0410 22:48:46.436691   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network mk-default-k8s-diff-port-519831 is active
	I0410 22:48:46.437163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Getting domain xml...
	I0410 22:48:46.437799   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Creating domain...
	I0410 22:48:47.206641   58186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:47.363331   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:47.380657   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:47.402234   58186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:48:47.402306   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.419356   58186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:47.419417   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.435320   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.450812   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.462588   58186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:47.474323   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.494156   58186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.515195   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
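Taken together, the sed/grep commands above rewrite four settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, conmon's cgroup, and the default sysctls list. A minimal sketch of the keys they leave behind (an assumption about the resulting drop-in; any other keys already in the file are untouched):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]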
	I0410 22:48:47.526148   58186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:47.536045   58186 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:47.536106   58186 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:47.549556   58186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
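The commands above are a probe-and-fallback for bridge netfilter: the sysctl check fails with status 255 because /proc/sys/net/bridge does not exist yet, so br_netfilter is loaded and IPv4 forwarding is switched on. A minimal shell sketch of the same sequence (it assumes br_netfilter is available as a kernel module on the guest image):

	# probe the bridge netfilter sysctl; load br_netfilter only if it is missing
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter
	fi
	# let the kernel forward IPv4 traffic between interfaces (needed for pod networking)
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"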
	I0410 22:48:47.567236   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:47.702628   58186 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:47.848908   58186 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:47.849000   58186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:47.854126   58186 start.go:562] Will wait 60s for crictl version
	I0410 22:48:47.854191   58186 ssh_runner.go:195] Run: which crictl
	I0410 22:48:47.858095   58186 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:47.897714   58186 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:47.897805   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.927597   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.958357   58186 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:48:45.584769   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.085396   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.585857   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.085186   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.585668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.085585   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.585617   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.085227   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.585626   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:50.084900   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.959811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:47.962805   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963246   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:47.963276   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963510   58186 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:47.967753   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
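The /etc/hosts update above uses a write-to-temp-then-sudo-cp idiom: the output redirection runs in the unprivileged shell before sudo applies, so the new file is built under /tmp and only copied into place as root. A minimal standalone sketch of the same idiom with the values from the log (the temp file name here is illustrative):

	# drop any existing host.minikube.internal line, append the fresh one, then install it as root
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.39.1\thost.minikube.internal'; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts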
	I0410 22:48:47.981154   58186 kubeadm.go:877] updating cluster {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:47.981258   58186 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:48:47.981298   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:48.018208   58186 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:48:48.018274   58186 ssh_runner.go:195] Run: which lz4
	I0410 22:48:48.023613   58186 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:48:48.029036   58186 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:48.029063   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:48:49.637729   58186 crio.go:462] duration metric: took 1.61414003s to copy over tarball
	I0410 22:48:49.637796   58186 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:52.046454   58186 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.408634496s)
	I0410 22:48:52.046482   58186 crio.go:469] duration metric: took 2.408728343s to extract the tarball
	I0410 22:48:52.046489   58186 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:47.701355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting to get IP...
	I0410 22:48:47.702406   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.702994   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.703067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.702962   59362 retry.go:31] will retry after 292.834608ms: waiting for machine to come up
	I0410 22:48:47.997294   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997757   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997785   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.997701   59362 retry.go:31] will retry after 341.35168ms: waiting for machine to come up
	I0410 22:48:48.340842   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341347   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341379   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.341279   59362 retry.go:31] will retry after 438.041848ms: waiting for machine to come up
	I0410 22:48:48.780565   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781092   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781116   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.781038   59362 retry.go:31] will retry after 557.770882ms: waiting for machine to come up
	I0410 22:48:49.340858   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.341282   59362 retry.go:31] will retry after 637.316206ms: waiting for machine to come up
	I0410 22:48:49.980256   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980737   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980761   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.980696   59362 retry.go:31] will retry after 909.873955ms: waiting for machine to come up
	I0410 22:48:50.891776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892197   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892229   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:50.892147   59362 retry.go:31] will retry after 745.06949ms: waiting for machine to come up
	I0410 22:48:51.638436   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638907   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638933   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:51.638854   59362 retry.go:31] will retry after 1.060037191s: waiting for machine to come up
	I0410 22:48:50.585691   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.085669   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.585308   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.085393   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.585619   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.085643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.585076   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.585027   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.085629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.087135   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:52.139368   58186 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:48:52.139389   58186 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:48:52.139397   58186 kubeadm.go:928] updating node { 192.168.39.10 8443 v1.29.3 crio true true} ...
	I0410 22:48:52.139535   58186 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-706500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:52.139629   58186 ssh_runner.go:195] Run: crio config
	I0410 22:48:52.193347   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:48:52.193375   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:52.193390   58186 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:52.193429   58186 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-706500 NodeName:embed-certs-706500 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:48:52.193606   58186 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-706500"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:52.193686   58186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:48:52.206450   58186 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:52.206507   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:52.218898   58186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0410 22:48:52.239285   58186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:52.257083   58186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0410 22:48:52.275448   58186 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:52.279486   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:52.293308   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:52.428424   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:52.446713   58186 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500 for IP: 192.168.39.10
	I0410 22:48:52.446738   58186 certs.go:194] generating shared ca certs ...
	I0410 22:48:52.446759   58186 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:52.446937   58186 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:52.446980   58186 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:52.446990   58186 certs.go:256] generating profile certs ...
	I0410 22:48:52.447059   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/client.key
	I0410 22:48:52.447124   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key.f3045f1a
	I0410 22:48:52.447156   58186 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key
	I0410 22:48:52.447294   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:52.447328   58186 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:52.447335   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:52.447354   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:52.447374   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:52.447405   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:52.447457   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:52.448166   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:52.481862   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:52.530983   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:52.572191   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:52.614466   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0410 22:48:52.644331   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:48:52.672811   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:52.698376   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:52.723998   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:52.749405   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:52.777529   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:52.803663   58186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:52.822234   58186 ssh_runner.go:195] Run: openssl version
	I0410 22:48:52.830835   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:52.843425   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848384   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848444   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.854869   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:52.867228   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:52.879319   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884241   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884324   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.890349   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:52.902398   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:52.913996   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918757   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918824   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.924669   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
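The three symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes of the respective certificates, which is how the system trust store locates CAs under /etc/ssl/certs. A minimal sketch of that convention, reusing the minikubeCA path from the log:

	# print the subject hash OpenSSL uses for trust-store lookups
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the certificate under <hash>.0 so it is picked up as a trusted CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"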
	I0410 22:48:52.936581   58186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:52.941242   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:52.947526   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:52.953939   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:52.960447   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:52.966829   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:52.973148   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:48:52.979557   58186 kubeadm.go:391] StartCluster: {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:52.979669   58186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:52.979744   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.018394   58186 cri.go:89] found id: ""
	I0410 22:48:53.018479   58186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:53.030088   58186 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:53.030112   58186 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:53.030118   58186 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:53.030184   58186 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:53.041035   58186 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:53.042312   58186 kubeconfig.go:125] found "embed-certs-706500" server: "https://192.168.39.10:8443"
	I0410 22:48:53.044306   58186 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:53.054911   58186 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.10
	I0410 22:48:53.054948   58186 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:53.054974   58186 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:53.055020   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.093035   58186 cri.go:89] found id: ""
	I0410 22:48:53.093109   58186 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:53.111257   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:53.122098   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:53.122125   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:53.122176   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:53.133513   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:53.133587   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:53.144275   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:53.154921   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:53.155000   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:53.165604   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.175520   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:53.175582   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.186094   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:53.196086   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:53.196156   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:53.206564   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:53.217180   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:53.336883   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.151708   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.367165   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.457694   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.572579   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:54.572693   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.073196   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.572865   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.595374   58186 api_server.go:72] duration metric: took 1.022777759s to wait for apiserver process to appear ...
	I0410 22:48:55.595403   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:48:55.595424   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:52.701137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701574   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701606   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:52.701529   59362 retry.go:31] will retry after 1.792719263s: waiting for machine to come up
	I0410 22:48:54.496380   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496823   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:54.496740   59362 retry.go:31] will retry after 2.321115222s: waiting for machine to come up
	I0410 22:48:56.819654   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820140   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:56.820072   59362 retry.go:31] will retry after 2.57309135s: waiting for machine to come up
	I0410 22:48:55.585506   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.585876   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.085775   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.585260   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.585588   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.085661   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.585663   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:00.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.843447   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:48:58.843487   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:48:58.843504   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:58.962381   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:58.962431   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.095611   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.100754   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.100781   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.595968   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.606936   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.606977   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.096182   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.106346   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:00.106388   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.595923   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.600197   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:49:00.609220   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:00.609246   58186 api_server.go:131] duration metric: took 5.013835577s to wait for apiserver health ...
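	For context: the healthz polling traced above simply retries GET /healthz against the apiserver until it returns 200, logging the intermediate 403/500 bodies along the way. A minimal Go sketch of that pattern (the URL, timeout, and TLS handling are illustrative assumptions, not minikube's actual api_server.go code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	// Non-200 responses (403 before RBAC bootstrap, 500 while post-start hooks
	// finish) are printed so the caller sees the same progression as the log.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping verification only for this sketch; a real caller would
			// trust the cluster CA certificate instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.10:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}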
	I0410 22:49:00.609256   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:49:00.609263   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:00.611220   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:00.612765   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:00.625567   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:00.648581   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:00.657652   58186 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:00.657688   58186 system_pods.go:61] "coredns-76f75df574-j4kj8" [1986e6b6-e6c7-4212-bdd5-10360a0b897c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:00.657696   58186 system_pods.go:61] "etcd-embed-certs-706500" [acbf9245-d4f8-4fa6-88a7-4f891f9f8403] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:00.657704   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [b9c79d1d-f571-4ed8-a68f-512e8a2a1705] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:00.657709   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [d229b85d-9a8d-4cd0-ac48-a6aea3769581] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:00.657715   58186 system_pods.go:61] "kube-proxy-8kzff" [ce35a33f-1697-44a7-ad64-83895236bc6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:00.657720   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [72c68a6c-beba-48a5-937b-51c40aab0386] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:00.657726   58186 system_pods.go:61] "metrics-server-57f55c9bc5-4r9pl" [40a91fc1-9e0a-4bcc-a2e9-65e9f2d2b960] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:00.657733   58186 system_pods.go:61] "storage-provisioner" [10f7637e-e6e0-4f04-b1eb-ac3bd205064f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:00.657742   58186 system_pods.go:74] duration metric: took 9.141859ms to wait for pod list to return data ...
	I0410 22:49:00.657752   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:00.662255   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:00.662300   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:00.662315   58186 node_conditions.go:105] duration metric: took 4.553643ms to run NodePressure ...
	I0410 22:49:00.662338   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:00.957923   58186 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962553   58186 kubeadm.go:733] kubelet initialised
	I0410 22:49:00.962575   58186 kubeadm.go:734] duration metric: took 4.616848ms waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962585   58186 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:00.968387   58186 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
	I0410 22:48:59.395416   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395864   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395893   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:59.395819   59362 retry.go:31] will retry after 2.378137008s: waiting for machine to come up
	I0410 22:49:01.776037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776587   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776641   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:49:01.776526   59362 retry.go:31] will retry after 4.360839049s: waiting for machine to come up
	I0410 22:49:00.585234   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.084884   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.585066   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.085697   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.585573   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.085552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.585521   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.584802   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:05.085266   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.975009   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:04.976854   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:06.141509   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142008   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Found IP for machine: 192.168.72.170
	I0410 22:49:06.142037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has current primary IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142047   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserving static IP address...
	I0410 22:49:06.142422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserved static IP address: 192.168.72.170
	I0410 22:49:06.142451   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for SSH to be available...
	I0410 22:49:06.142476   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.142499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | skip adding static IP to network mk-default-k8s-diff-port-519831 - found existing host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"}
	I0410 22:49:06.142518   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Getting to WaitForSSH function...
	I0410 22:49:06.144878   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145206   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.145238   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145326   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH client type: external
	I0410 22:49:06.145365   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa (-rw-------)
	I0410 22:49:06.145401   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:06.145421   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | About to run SSH command:
	I0410 22:49:06.145438   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | exit 0
	I0410 22:49:06.272546   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:06.272919   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetConfigRaw
	I0410 22:49:06.273605   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.276234   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276610   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.276644   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276851   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:49:06.277100   58701 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:06.277127   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:06.277400   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.279729   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.280146   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280295   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.280480   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280658   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280794   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.280939   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.281121   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.281138   58701 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:06.385219   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:06.385254   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385498   58701 buildroot.go:166] provisioning hostname "default-k8s-diff-port-519831"
	I0410 22:49:06.385527   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385716   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.388422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.388922   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.388963   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.389072   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.389292   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389462   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389600   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.389751   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.389924   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.389938   58701 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-519831 && echo "default-k8s-diff-port-519831" | sudo tee /etc/hostname
	I0410 22:49:06.507221   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-519831
	
	I0410 22:49:06.507252   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.509837   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510179   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.510225   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510385   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.510561   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510880   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.511040   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.511236   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.511262   58701 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-519831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-519831/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-519831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:06.626097   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:06.626129   58701 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:06.626153   58701 buildroot.go:174] setting up certificates
	I0410 22:49:06.626163   58701 provision.go:84] configureAuth start
	I0410 22:49:06.626173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.626499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.629067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629412   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.629450   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629559   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.632132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632517   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.632548   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632674   58701 provision.go:143] copyHostCerts
	I0410 22:49:06.632734   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:06.632755   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:06.632822   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:06.633021   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:06.633037   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:06.633078   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:06.633179   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:06.633191   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:06.633223   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:06.633295   58701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-519831 san=[127.0.0.1 192.168.72.170 default-k8s-diff-port-519831 localhost minikube]
	I0410 22:49:06.835016   58701 provision.go:177] copyRemoteCerts
	I0410 22:49:06.835077   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:06.835104   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.837769   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838124   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.838152   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838327   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.838519   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.838669   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.838808   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:06.921929   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:06.947855   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0410 22:49:06.972865   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:06.999630   58701 provision.go:87] duration metric: took 373.45654ms to configureAuth
	I0410 22:49:06.999658   58701 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:06.999872   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:49:06.999942   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.003015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003418   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.003452   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.003793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.003946   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.004062   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.004208   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.004425   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.004448   58701 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:07.273568   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:07.273601   58701 machine.go:97] duration metric: took 996.483382ms to provisionDockerMachine
	I0410 22:49:07.273618   58701 start.go:293] postStartSetup for "default-k8s-diff-port-519831" (driver="kvm2")
	I0410 22:49:07.273634   58701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:07.273660   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.274009   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:07.274040   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.276736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.277155   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.277537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.277740   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.277891   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.361056   58701 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:07.365729   58701 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:07.365759   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:07.365834   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:07.365935   58701 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:07.366064   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:07.376754   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:07.509384   57270 start.go:364] duration metric: took 56.035567079s to acquireMachinesLock for "no-preload-646133"
	I0410 22:49:07.509424   57270 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:49:07.509432   57270 fix.go:54] fixHost starting: 
	I0410 22:49:07.509837   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:07.509872   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:07.526882   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0410 22:49:07.527337   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:07.527780   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:49:07.527801   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:07.528077   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:07.528238   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:07.528366   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:49:07.529732   57270 fix.go:112] recreateIfNeeded on no-preload-646133: state=Stopped err=<nil>
	I0410 22:49:07.529755   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	W0410 22:49:07.529878   57270 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:49:07.531875   57270 out.go:177] * Restarting existing kvm2 VM for "no-preload-646133" ...
	I0410 22:49:07.402691   58701 start.go:296] duration metric: took 129.059293ms for postStartSetup
	I0410 22:49:07.402731   58701 fix.go:56] duration metric: took 20.99318672s for fixHost
	I0410 22:49:07.402751   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.405634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.405955   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.405996   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.406161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.406378   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406647   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.406826   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.407062   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.407079   58701 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:49:07.509210   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789347.471050157
	
	I0410 22:49:07.509233   58701 fix.go:216] guest clock: 1712789347.471050157
	I0410 22:49:07.509241   58701 fix.go:229] Guest: 2024-04-10 22:49:07.471050157 +0000 UTC Remote: 2024-04-10 22:49:07.402735415 +0000 UTC m=+140.054227768 (delta=68.314742ms)
	I0410 22:49:07.509287   58701 fix.go:200] guest clock delta is within tolerance: 68.314742ms
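	For context: the fix.go lines above compare the guest VM clock against the host and proceed only when the delta stays within a tolerance (about 68ms here). A small Go sketch of that check (the tolerance value and function names are assumptions for illustration, not minikube's own code):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK returns the absolute guest/host clock difference and whether
	// it falls within the allowed drift, as in the "guest clock delta is within
	// tolerance" line above.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(68 * time.Millisecond) // roughly the delta seen in the log
		if d, ok := clockDeltaOK(guest, host, 2*time.Second); ok {
			fmt.Printf("guest clock delta %v is within tolerance\n", d)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance\n", d)
		}
	}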
	I0410 22:49:07.509297   58701 start.go:83] releasing machines lock for "default-k8s-diff-port-519831", held for 21.099785205s
	I0410 22:49:07.509328   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.509613   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:07.512255   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.512667   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512827   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513364   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513531   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513610   58701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:07.513649   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.513750   58701 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:07.513771   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.516338   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516685   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.516802   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516951   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517142   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.517161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.517310   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517470   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517602   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517604   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.517765   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.594218   58701 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:07.633783   58701 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:07.790430   58701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:07.797279   58701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:07.797358   58701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:07.815457   58701 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:07.815488   58701 start.go:494] detecting cgroup driver to use...
	I0410 22:49:07.815561   58701 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:07.833038   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:07.848577   58701 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:07.848648   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:07.863609   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:07.878299   58701 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:07.999388   58701 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:08.155534   58701 docker.go:233] disabling docker service ...
	I0410 22:49:08.155613   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:08.175545   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:08.195923   58701 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:08.340282   58701 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:08.485647   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:08.500245   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:08.520493   58701 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:08.520582   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.535455   58701 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:08.535521   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.547058   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.559638   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.571374   58701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:08.583796   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.598091   58701 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.622634   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
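The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place on the guest: pause image, cgroup_manager, conmon_cgroup, and the default_sysctls block. As a rough illustration of the same kind of in-place edit, here is a stand-alone Go sketch using the standard library; the file path is illustrative and the real work in the test is done by the sed invocations shown in the log.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Illustrative local path; the log edits /etc/crio/crio.conf.d/02-crio.conf on the guest.
	const path = "02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent in spirit to:
	//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	//   sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}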
	I0410 22:49:08.633858   58701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:08.645114   58701 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:08.645167   58701 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:08.660204   58701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
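The failed `sysctl net.bridge.bridge-nf-call-iptables` above is expected: that key only exists under /proc/sys once the br_netfilter module is loaded, so minikube treats the failure as non-fatal, runs `modprobe br_netfilter`, and then enables IPv4 forwarding. A small check-then-load sketch of the same idea (needs root; the /proc paths are the real kernel interfaces, the rest is illustrative):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	if _, err := os.Stat(key); os.IsNotExist(err) {
		// The sysctl key only appears once br_netfilter is loaded.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}

	// Enable IPv4 forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatal(err)
	}
}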
	I0410 22:49:08.671345   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:08.804523   58701 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:08.953644   58701 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:08.953717   58701 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:08.958661   58701 start.go:562] Will wait 60s for crictl version
	I0410 22:49:08.958715   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:49:08.962938   58701 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:09.006335   58701 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:49:09.006425   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.037315   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.069366   58701 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:49:07.533174   57270 main.go:141] libmachine: (no-preload-646133) Calling .Start
	I0410 22:49:07.533352   57270 main.go:141] libmachine: (no-preload-646133) Ensuring networks are active...
	I0410 22:49:07.534117   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network default is active
	I0410 22:49:07.534413   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network mk-no-preload-646133 is active
	I0410 22:49:07.534851   57270 main.go:141] libmachine: (no-preload-646133) Getting domain xml...
	I0410 22:49:07.535553   57270 main.go:141] libmachine: (no-preload-646133) Creating domain...
	I0410 22:49:08.844990   57270 main.go:141] libmachine: (no-preload-646133) Waiting to get IP...
	I0410 22:49:08.845908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:08.846363   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:08.846459   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:08.846332   59513 retry.go:31] will retry after 241.150391ms: waiting for machine to come up
	I0410 22:49:09.088961   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.089455   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.089489   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.089417   59513 retry.go:31] will retry after 349.96397ms: waiting for machine to come up
	I0410 22:49:09.441226   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.441799   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.441828   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.441754   59513 retry.go:31] will retry after 444.576999ms: waiting for machine to come up
	I0410 22:49:05.585408   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.085250   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.585503   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.085422   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.584909   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.084863   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.585859   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.085175   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.585660   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:10.085221   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.475385   58186 pod_ready.go:92] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:07.475414   58186 pod_ready.go:81] duration metric: took 6.506993581s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:07.475424   58186 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:09.486133   58186 pod_ready.go:102] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:11.483972   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.483994   58186 pod_ready.go:81] duration metric: took 4.008564427s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.484005   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490340   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.490380   58186 pod_ready.go:81] duration metric: took 6.362017ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490399   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497078   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.497110   58186 pod_ready.go:81] duration metric: took 6.701645ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497124   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504091   58186 pod_ready.go:92] pod "kube-proxy-8kzff" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.504118   58186 pod_ready.go:81] duration metric: took 6.985136ms for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504132   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510619   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.510656   58186 pod_ready.go:81] duration metric: took 6.513031ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510674   58186 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
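The pod_ready.go lines above wait, pod by pod, for the Ready condition on the kube-system control-plane pods, with a 4m0s budget per pod. A compact client-go sketch of that check; the kubeconfig location and the pod name are placeholders lifted from the log, and this is not minikube's own helper.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	const ns, name = "kube-system", "etcd-embed-certs-706500" // placeholder pod name from the log
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatalf("timed out waiting for pod %q to be Ready", name)
}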
	I0410 22:49:09.070592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:09.073850   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:09.074190   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074388   58701 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:09.079170   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:09.093764   58701 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:09.093973   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:49:09.094040   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:09.140874   58701 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:49:09.140951   58701 ssh_runner.go:195] Run: which lz4
	I0410 22:49:09.146775   58701 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:49:09.152876   58701 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:49:09.152917   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:49:10.827934   58701 crio.go:462] duration metric: took 1.681191787s to copy over tarball
	I0410 22:49:10.828019   58701 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:49:09.888688   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.892576   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.892607   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.889179   59513 retry.go:31] will retry after 560.585608ms: waiting for machine to come up
	I0410 22:49:10.451001   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:10.451630   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:10.451663   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:10.451590   59513 retry.go:31] will retry after 601.519186ms: waiting for machine to come up
	I0410 22:49:11.054324   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.054664   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.054693   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.054653   59513 retry.go:31] will retry after 750.183717ms: waiting for machine to come up
	I0410 22:49:11.805908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.806303   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.806331   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.806254   59513 retry.go:31] will retry after 883.805148ms: waiting for machine to come up
	I0410 22:49:12.691316   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:12.691861   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:12.691893   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:12.691804   59513 retry.go:31] will retry after 1.39605629s: waiting for machine to come up
	I0410 22:49:14.090350   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:14.090795   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:14.090821   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:14.090753   59513 retry.go:31] will retry after 1.388324423s: waiting for machine to come up
	I0410 22:49:10.585333   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.585062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.085191   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.585644   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.085615   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.585355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.085270   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.584868   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:15.085639   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.521844   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:16.041569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:13.328492   58701 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.500439721s)
	I0410 22:49:13.328534   58701 crio.go:469] duration metric: took 2.500564923s to extract the tarball
	I0410 22:49:13.328545   58701 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:49:13.367568   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:13.415759   58701 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:49:13.415780   58701 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:49:13.415788   58701 kubeadm.go:928] updating node { 192.168.72.170 8444 v1.29.3 crio true true} ...
	I0410 22:49:13.415899   58701 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-519831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:13.415982   58701 ssh_runner.go:195] Run: crio config
	I0410 22:49:13.473019   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:13.473046   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:13.473063   58701 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:13.473100   58701 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.170 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-519831 NodeName:default-k8s-diff-port-519831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:13.473261   58701 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-519831"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:13.473325   58701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:49:13.487302   58701 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:13.487368   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:13.498496   58701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0410 22:49:13.518312   58701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:49:13.537972   58701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0410 22:49:13.558714   58701 ssh_runner.go:195] Run: grep 192.168.72.170	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:13.562886   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
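The bash one-liner above refreshes the control-plane.minikube.internal entry in /etc/hosts: it filters out any stale line for that hostname, appends the new mapping, writes the result to a temp file, and copies it back with sudo. The same idea in plain Go, writing to a scratch copy instead of the real /etc/hosts; this is an illustration of the pattern, not minikube's code.

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.170\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the same hostname, like `grep -v` in the log.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// Write to a scratch file; the real flow copies it over /etc/hosts with sudo.
	if err := os.WriteFile("hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}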
	I0410 22:49:13.575957   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:13.706316   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:13.725898   58701 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831 for IP: 192.168.72.170
	I0410 22:49:13.725924   58701 certs.go:194] generating shared ca certs ...
	I0410 22:49:13.725944   58701 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:13.726119   58701 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:13.726173   58701 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:13.726185   58701 certs.go:256] generating profile certs ...
	I0410 22:49:13.726297   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/client.key
	I0410 22:49:13.726398   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key.ff579077
	I0410 22:49:13.726454   58701 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key
	I0410 22:49:13.726606   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:13.726644   58701 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:13.726656   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:13.726685   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:13.726725   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:13.726756   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:13.726811   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:13.727747   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:13.780060   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:13.818446   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:13.865986   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:13.897578   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0410 22:49:13.937123   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:49:13.970558   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:13.997678   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:49:14.025173   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:14.051190   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:14.079109   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:14.107547   58701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:14.128029   58701 ssh_runner.go:195] Run: openssl version
	I0410 22:49:14.134686   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:14.148733   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154057   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154114   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.160626   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:14.174406   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:14.187513   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193279   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193344   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.199518   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:14.213538   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:14.225618   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230610   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230666   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.236756   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:14.250041   58701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:14.255320   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:14.262821   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:14.268854   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:14.275152   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:14.281598   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:14.287895   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
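Each `openssl x509 -noout -checkend 86400` run above asks whether the given certificate will still be valid 24 hours from now; only a cert that is missing or about to expire would trigger regeneration. The equivalent check with Go's standard library, shown here as a sketch with a placeholder file name rather than the paths under /var/lib/minikube/certs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path; the log checks the certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Same question as `openssl x509 -checkend 86400`: valid for another 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; would be regenerated")
	} else {
		fmt.Println("certificate is still valid for at least 24h")
	}
}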
	I0410 22:49:14.294125   58701 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:14.294246   58701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:14.294301   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.332192   58701 cri.go:89] found id: ""
	I0410 22:49:14.332268   58701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:14.343174   58701 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:14.343198   58701 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:14.343205   58701 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:14.343261   58701 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:14.355648   58701 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:14.357310   58701 kubeconfig.go:125] found "default-k8s-diff-port-519831" server: "https://192.168.72.170:8444"
	I0410 22:49:14.360713   58701 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:14.371972   58701 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.170
	I0410 22:49:14.372011   58701 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:14.372025   58701 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:14.372083   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.410517   58701 cri.go:89] found id: ""
	I0410 22:49:14.410594   58701 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:14.428686   58701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:14.443256   58701 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:14.443281   58701 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:14.443353   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0410 22:49:14.455086   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:14.455156   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:14.466151   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0410 22:49:14.476799   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:14.476852   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:14.487588   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.498476   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:14.498534   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.509248   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0410 22:49:14.520223   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:14.520287   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:14.531388   58701 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:14.542775   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:14.673733   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.773338   58701 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099570437s)
	I0410 22:49:15.773385   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.985355   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.052996   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.126251   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:16.126362   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.626615   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.127289   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.166269   58701 api_server.go:72] duration metric: took 1.040013076s to wait for apiserver process to appear ...
	I0410 22:49:17.166315   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:17.166339   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:17.166964   58701 api_server.go:269] stopped: https://192.168.72.170:8444/healthz: Get "https://192.168.72.170:8444/healthz": dial tcp 192.168.72.170:8444: connect: connection refused
	I0410 22:49:15.480947   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:15.481358   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:15.481386   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:15.481309   59513 retry.go:31] will retry after 2.276682979s: waiting for machine to come up
	I0410 22:49:17.759404   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:17.759931   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:17.759975   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:17.759887   59513 retry.go:31] will retry after 2.254373826s: waiting for machine to come up
	I0410 22:49:15.585476   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.085404   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.585123   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.085713   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.584877   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.085601   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.585222   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.084891   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.585215   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:20.085668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.519156   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:20.520053   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:17.667248   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.709507   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:20.709538   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:20.709554   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.740392   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:20.740483   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.166658   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.174343   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.174378   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.667345   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.685078   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.685112   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:22.166644   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:22.171611   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:49:22.178452   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:22.178484   58701 api_server.go:131] duration metric: took 5.012161431s to wait for apiserver health ...
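For reference, the alternating 500/200 responses above come from repeatedly polling the apiserver's /healthz endpoint until it reports healthy. The sketch below is a minimal, illustrative version of such a poll; the URL is copied from the log, the insecure TLS client and the timing constants are assumptions of this sketch, and minikube's real check in api_server.go authenticates with the cluster's CA and kubeconfig credentials instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// InsecureSkipVerify only because this sketch has no cluster CA to trust.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.170:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}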
	I0410 22:49:22.178493   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:22.178499   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:22.180370   58701 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:22.181768   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:22.197462   58701 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:22.218348   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:22.236800   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:22.236830   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:22.236837   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:22.236843   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:22.236849   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:22.236861   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:22.236866   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:22.236871   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:22.236876   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:22.236884   58701 system_pods.go:74] duration metric: took 18.510987ms to wait for pod list to return data ...
	I0410 22:49:22.236893   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:22.242143   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:22.242167   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:22.242177   58701 node_conditions.go:105] duration metric: took 5.279415ms to run NodePressure ...
	I0410 22:49:22.242192   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:22.532741   58701 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537418   58701 kubeadm.go:733] kubelet initialised
	I0410 22:49:22.537444   58701 kubeadm.go:734] duration metric: took 4.675489ms waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537453   58701 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:22.543364   58701 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.549161   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549186   58701 pod_ready.go:81] duration metric: took 5.796619ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.549196   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549207   58701 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.554131   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554156   58701 pod_ready.go:81] duration metric: took 4.941026ms for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.554165   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554172   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.558783   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558812   58701 pod_ready.go:81] duration metric: took 4.633262ms for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.558822   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558828   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.622314   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622344   58701 pod_ready.go:81] duration metric: took 63.505681ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.622356   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622370   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.022239   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022266   58701 pod_ready.go:81] duration metric: took 399.888837ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.022275   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022286   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.422213   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422245   58701 pod_ready.go:81] duration metric: took 399.950443ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.422257   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422270   58701 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.823832   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823858   58701 pod_ready.go:81] duration metric: took 401.581123ms for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.823868   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823875   58701 pod_ready.go:38] duration metric: took 1.286413141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
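The pod_ready.go waits above amount to fetching each system-critical pod and inspecting its Ready condition, skipping the wait while the hosting node itself reports "Ready":"False". A hedged client-go sketch of that per-pod check follows; the kubeconfig path is a placeholder, the pod name is taken from the log, and this helper is illustrative rather than minikube's actual implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's PodReady condition is True.
func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder path; the test run writes its own kubeconfig under the integration home.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(cs, "kube-system", "coredns-76f75df574-ghnvx")
	fmt.Println(ready, err)
}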
	I0410 22:49:23.823889   58701 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:49:23.840663   58701 ops.go:34] apiserver oom_adj: -16
	I0410 22:49:23.840691   58701 kubeadm.go:591] duration metric: took 9.497479077s to restartPrimaryControlPlane
	I0410 22:49:23.840702   58701 kubeadm.go:393] duration metric: took 9.546582608s to StartCluster
	I0410 22:49:23.840718   58701 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.840795   58701 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:49:23.843350   58701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.843613   58701 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:49:23.845385   58701 out.go:177] * Verifying Kubernetes components...
	I0410 22:49:23.843685   58701 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:49:23.846686   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:23.845421   58701 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846834   58701 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-519831"
	I0410 22:49:23.843826   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	W0410 22:49:23.846852   58701 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:49:23.846901   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.845429   58701 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846969   58701 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-519831"
	I0410 22:49:23.845433   58701 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.847069   58701 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.847088   58701 addons.go:243] addon metrics-server should already be in state true
	I0410 22:49:23.847122   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.847349   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847358   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847381   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847384   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847495   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847532   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.863090   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I0410 22:49:23.863240   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0410 22:49:23.863685   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.863793   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.864315   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864333   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864356   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864371   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864741   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864749   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864949   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.865210   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.865258   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.867599   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0410 22:49:23.868035   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.868627   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.868652   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.868739   58701 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.868757   58701 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:49:23.868785   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.869023   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.869094   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869136   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.869562   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869630   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.881589   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0410 22:49:23.881997   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.882429   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.882442   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.882719   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.882914   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.884708   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.886865   58701 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:49:23.886946   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0410 22:49:23.888493   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:49:23.888511   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:49:23.888532   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.888850   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.889129   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0410 22:49:23.889513   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.889536   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.889601   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.890020   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.890265   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.890285   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.890308   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.890667   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.891458   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.891496   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.892090   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.892232   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.894143   58701 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:20.015689   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:20.016192   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:20.016230   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:20.016163   59513 retry.go:31] will retry after 2.611766259s: waiting for machine to come up
	I0410 22:49:22.629270   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:22.629704   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:22.629731   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:22.629644   59513 retry.go:31] will retry after 3.270808972s: waiting for machine to come up
	I0410 22:49:23.892695   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.892720   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.895489   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.895599   58701 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:23.895609   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:49:23.895623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.896367   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.896558   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.896754   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.898964   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899320   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.899355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899535   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.899715   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.899855   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.899999   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.910046   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0410 22:49:23.910471   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.911056   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.911077   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.911445   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.911653   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.913330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.913603   58701 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:23.913619   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:49:23.913637   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.916303   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916759   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.916820   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916923   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.917137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.917377   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.917517   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:24.067636   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:24.087396   58701 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:24.204429   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:49:24.204457   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:49:24.213319   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:24.224083   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:24.234156   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:49:24.234182   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:49:24.273950   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.273980   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:49:24.295822   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.580460   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580835   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.580853   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.580864   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.581102   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.581126   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.589648   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.589714   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.589981   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.590040   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.590062   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339438   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.043578779s)
	I0410 22:49:25.339489   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339451   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115333809s)
	I0410 22:49:25.339560   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339593   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339872   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339897   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339911   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339924   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339944   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.339956   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339984   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340004   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.340015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.340149   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.340185   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340203   58701 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-519831"
	I0410 22:49:25.341481   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.341497   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.344575   58701 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0410 22:49:20.585629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.084898   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.585346   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.085672   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.585768   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.085613   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.585507   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.085104   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.585745   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:25.084858   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.017917   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.018591   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:27.019206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.341622   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.345974   58701 addons.go:505] duration metric: took 1.502302613s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
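The addon enable path logged above is essentially: scp each manifest into /etc/kubernetes/addons on the VM, then run kubectl apply against them with the cluster's kubeconfig. A rough local equivalent using os/exec is sketched below; the binary path and manifest names are copied from the log, but the helper itself is hypothetical and uses the --kubeconfig flag rather than the KUBECONFIG environment variable seen in the logged command.

package main

import (
	"fmt"
	"os/exec"
)

// applyManifests shells out to kubectl to apply a set of addon manifests,
// mirroring the command the log shows being run inside the VM over SSH.
func applyManifests(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command(kubectl, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.29.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	fmt.Println(err)
}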
	I0410 22:49:26.094458   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:25.904062   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904580   57270 main.go:141] libmachine: (no-preload-646133) Found IP for machine: 192.168.50.17
	I0410 22:49:25.904608   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has current primary IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904622   57270 main.go:141] libmachine: (no-preload-646133) Reserving static IP address...
	I0410 22:49:25.905076   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.905117   57270 main.go:141] libmachine: (no-preload-646133) DBG | skip adding static IP to network mk-no-preload-646133 - found existing host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"}
	I0410 22:49:25.905134   57270 main.go:141] libmachine: (no-preload-646133) Reserved static IP address: 192.168.50.17
	I0410 22:49:25.905151   57270 main.go:141] libmachine: (no-preload-646133) Waiting for SSH to be available...
	I0410 22:49:25.905170   57270 main.go:141] libmachine: (no-preload-646133) DBG | Getting to WaitForSSH function...
	I0410 22:49:25.907397   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907773   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.907796   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907937   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH client type: external
	I0410 22:49:25.907960   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa (-rw-------)
	I0410 22:49:25.907979   57270 main.go:141] libmachine: (no-preload-646133) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:25.907989   57270 main.go:141] libmachine: (no-preload-646133) DBG | About to run SSH command:
	I0410 22:49:25.907997   57270 main.go:141] libmachine: (no-preload-646133) DBG | exit 0
	I0410 22:49:26.032683   57270 main.go:141] libmachine: (no-preload-646133) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:26.033065   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetConfigRaw
	I0410 22:49:26.033761   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.036545   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.036951   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.036982   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.037187   57270 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/config.json ...
	I0410 22:49:26.037403   57270 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:26.037424   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:26.037655   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.039750   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040081   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.040102   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040285   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.040486   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040657   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040818   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.040972   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.041180   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.041197   57270 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:26.149298   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:26.149335   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149618   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:49:26.149647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149849   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.152432   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152799   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.152829   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.153233   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153406   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153571   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.153774   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.153992   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.154010   57270 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-646133 && echo "no-preload-646133" | sudo tee /etc/hostname
	I0410 22:49:26.283760   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-646133
	
	I0410 22:49:26.283794   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.286605   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.286925   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.286955   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.287097   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.287277   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287425   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287551   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.287725   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.287944   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.287969   57270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-646133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-646133/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-646133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:26.402869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:26.402905   57270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:26.402945   57270 buildroot.go:174] setting up certificates
	I0410 22:49:26.402956   57270 provision.go:84] configureAuth start
	I0410 22:49:26.402973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.403234   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.405718   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406079   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.406119   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.408549   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.408882   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.408917   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.409034   57270 provision.go:143] copyHostCerts
	I0410 22:49:26.409106   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:26.409124   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:26.409177   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:26.409310   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:26.409320   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:26.409341   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:26.409405   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:26.409412   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:26.409430   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:26.409476   57270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.no-preload-646133 san=[127.0.0.1 192.168.50.17 localhost minikube no-preload-646133]
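The server certificate generated at this step carries the SANs listed in the log (loopback, the machine IP, and the machine host names). The sketch below builds a certificate with equivalent SANs using only the standard library; it is self-signed purely for brevity and uses assumed validity and key-usage values, whereas minikube signs the machine certificate with its own CA key (ca.pem / ca-key.pem).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirroring those in the log line above.
	dnsNames := []string{"localhost", "minikube", "no-preload-646133"}
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.17")}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-646133"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	// Self-signed here; minikube instead signs with its CA certificate and key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}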
	I0410 22:49:26.567556   57270 provision.go:177] copyRemoteCerts
	I0410 22:49:26.567611   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:26.567647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.570205   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570589   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.570614   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570805   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.571034   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.571172   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.571294   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:26.655943   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:26.681691   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:49:26.706573   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:26.733054   57270 provision.go:87] duration metric: took 330.073783ms to configureAuth
	I0410 22:49:26.733088   57270 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:26.733276   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:49:26.733347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.735910   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736264   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.736295   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736474   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.736648   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736798   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736925   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.737055   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.737225   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.737241   57270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:27.008174   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:27.008202   57270 machine.go:97] duration metric: took 970.785508ms to provisionDockerMachine
	I0410 22:49:27.008216   57270 start.go:293] postStartSetup for "no-preload-646133" (driver="kvm2")
	I0410 22:49:27.008236   57270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:27.008263   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.008554   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:27.008580   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.011150   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011561   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.011604   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011900   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.012090   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.012274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.012432   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.105247   57270 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:27.109842   57270 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:27.109868   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:27.109927   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:27.109993   57270 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:27.110080   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:27.121451   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:27.151797   57270 start.go:296] duration metric: took 143.569287ms for postStartSetup
	I0410 22:49:27.151836   57270 fix.go:56] duration metric: took 19.642403615s for fixHost
	I0410 22:49:27.151865   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.154454   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154869   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.154903   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154987   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.155193   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155512   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.155660   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:27.155862   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:27.155875   57270 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:49:27.265609   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789367.209761579
	
	I0410 22:49:27.265652   57270 fix.go:216] guest clock: 1712789367.209761579
	I0410 22:49:27.265662   57270 fix.go:229] Guest: 2024-04-10 22:49:27.209761579 +0000 UTC Remote: 2024-04-10 22:49:27.151840464 +0000 UTC m=+377.371052419 (delta=57.921115ms)
	I0410 22:49:27.265687   57270 fix.go:200] guest clock delta is within tolerance: 57.921115ms
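	The "date +%!s(MISSING).%!N(MISSING)" probe above is the same logging artifact; in effect the guest clock is read as Unix seconds with nanoseconds and compared against the host time to produce the ~58ms delta reported above. Reconstructed:
	
	    # clock probe, with the format verbs expanded: seconds.nanoseconds since the epoch
	    date +%s.%N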
	I0410 22:49:27.265697   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 19.756293566s
	I0410 22:49:27.265724   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.265960   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:27.268735   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269184   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.269216   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269380   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270014   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270233   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270331   57270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:27.270376   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.270645   57270 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:27.270669   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.273542   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273846   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273986   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274019   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274140   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274230   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274259   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274318   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274400   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274531   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274536   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274688   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274723   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.274806   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.359922   57270 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:27.400885   57270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:27.555260   57270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:27.561275   57270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:27.561333   57270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:27.578478   57270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
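	The find invocation above also carries a "%!p(MISSING)" artifact; reconstructed as a shell command it lists and renames any pre-existing bridge/podman CNI configs (the .mk_disabled suffix is what the "disabled [...] bridge cni config(s)" line refers to), roughly:
	
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;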
	I0410 22:49:27.578502   57270 start.go:494] detecting cgroup driver to use...
	I0410 22:49:27.578567   57270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:27.598020   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:27.613068   57270 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:27.613140   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:27.629253   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:27.644130   57270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:27.791801   57270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:27.952366   57270 docker.go:233] disabling docker service ...
	I0410 22:49:27.952477   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:27.968629   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:27.982330   57270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:28.117396   57270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:28.240808   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:28.257299   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:28.280918   57270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:28.280991   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.296415   57270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:28.296480   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.308602   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.319535   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.329812   57270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:28.341466   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.354706   57270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.374405   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
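	Taken together, the sed edits above leave the touched keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (a reconstruction from the sed expressions only; section headers and the file's other settings are omitted):
	
	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]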
	I0410 22:49:28.385094   57270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:28.394412   57270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:28.394466   57270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:28.407654   57270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:28.418381   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:28.525783   57270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:28.678643   57270 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:28.678706   57270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:28.683681   57270 start.go:562] Will wait 60s for crictl version
	I0410 22:49:28.683737   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:28.687703   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:28.725311   57270 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:49:28.725414   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.755393   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.788963   57270 out.go:177] * Preparing Kubernetes v1.30.0-rc.1 on CRI-O 1.29.1 ...
	I0410 22:49:28.790274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:28.793091   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793418   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:28.793452   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793659   57270 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:28.798916   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:28.814575   57270 kubeadm.go:877] updating cluster {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:28.814689   57270 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 22:49:28.814717   57270 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:28.852604   57270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.1". assuming images are not preloaded.
	I0410 22:49:28.852627   57270 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.1 registry.k8s.io/kube-controller-manager:v1.30.0-rc.1 registry.k8s.io/kube-scheduler:v1.30.0-rc.1 registry.k8s.io/kube-proxy:v1.30.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:49:28.852698   57270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.852707   57270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.852733   57270 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.852756   57270 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0410 22:49:28.852803   57270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.852870   57270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.852890   57270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.852917   57270 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854348   57270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.854354   57270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.854378   57270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.854419   57270 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854421   57270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.854355   57270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.854353   57270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.854740   57270 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0410 22:49:29.066608   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0410 22:49:29.072486   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.073347   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.075270   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.082649   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.085737   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.093699   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.290780   57270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" does not exist at hash "ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b" in container runtime
	I0410 22:49:29.290810   57270 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0410 22:49:29.290839   57270 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.290837   57270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.290849   57270 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0410 22:49:29.290871   57270 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290902   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304346   57270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.1" does not exist at hash "69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061" in container runtime
	I0410 22:49:29.304409   57270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.304459   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304510   57270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" does not exist at hash "bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895" in container runtime
	I0410 22:49:29.304599   57270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.304635   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304563   57270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" does not exist at hash "577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090" in container runtime
	I0410 22:49:29.304689   57270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.304738   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.311219   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.311264   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.311311   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.324663   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.324770   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.324855   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.442426   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0410 22:49:29.442541   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.458416   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0410 22:49:29.458526   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:29.468890   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.468998   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.481365   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.481482   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.498862   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498899   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0410 22:49:29.498913   57270 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498927   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498951   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1 (exists)
	I0410 22:49:29.498957   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498964   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498982   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1 (exists)
	I0410 22:49:29.499012   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498926   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0410 22:49:29.507249   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1 (exists)
	I0410 22:49:29.507282   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1 (exists)
	I0410 22:49:29.751612   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:25.585095   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.085119   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.585846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.084920   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.585251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.084926   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.585643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.084937   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.585666   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:30.085088   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.518476   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:31.518837   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:28.592323   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.098027   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.591789   58701 node_ready.go:49] node "default-k8s-diff-port-519831" has status "Ready":"True"
	I0410 22:49:31.591822   58701 node_ready.go:38] duration metric: took 7.504383585s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:31.591835   58701 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:31.599103   58701 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607758   58701 pod_ready.go:92] pod "coredns-76f75df574-ghnvx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:31.607787   58701 pod_ready.go:81] duration metric: took 8.655521ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607801   58701 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:33.690936   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.191950196s)
	I0410 22:49:33.690965   57270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.939318786s)
	I0410 22:49:33.691014   57270 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0410 22:49:33.691045   57270 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:33.690973   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0410 22:49:33.691091   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.691101   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:33.691163   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.695868   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:30.585515   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.085273   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.585347   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.585361   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.085648   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.585256   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.084938   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.585005   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:35.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.018733   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:36.019904   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:33.615785   58701 pod_ready.go:102] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:35.115811   58701 pod_ready.go:92] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:35.115846   58701 pod_ready.go:81] duration metric: took 3.508038321s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:35.115856   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123593   58701 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.123624   58701 pod_ready.go:81] duration metric: took 2.007760022s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123638   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130390   58701 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.130421   58701 pod_ready.go:81] duration metric: took 6.771239ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130436   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136219   58701 pod_ready.go:92] pod "kube-proxy-5mbwx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.136253   58701 pod_ready.go:81] duration metric: took 5.809077ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136265   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142909   58701 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.142939   58701 pod_ready.go:81] duration metric: took 6.664922ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142954   58701 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:35.767190   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1: (2.075997626s)
	I0410 22:49:35.767227   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1 from cache
	I0410 22:49:35.767261   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767278   57270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071386498s)
	I0410 22:49:35.767326   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767327   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0410 22:49:35.767497   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:35.773679   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0410 22:49:37.666289   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1: (1.898906389s)
	I0410 22:49:37.666326   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1 from cache
	I0410 22:49:37.666358   57270 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:37.666422   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:39.652778   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.986322091s)
	I0410 22:49:39.652820   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0410 22:49:39.652855   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:39.652951   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:35.585228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.085699   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.585690   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.085760   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.584867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:37.584947   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:37.625964   57719 cri.go:89] found id: ""
	I0410 22:49:37.625989   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.625996   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:37.626001   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:37.626046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:37.669151   57719 cri.go:89] found id: ""
	I0410 22:49:37.669178   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.669188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:37.669194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:37.669242   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:37.711426   57719 cri.go:89] found id: ""
	I0410 22:49:37.711456   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.711466   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:37.711474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:37.711538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:37.754678   57719 cri.go:89] found id: ""
	I0410 22:49:37.754707   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.754719   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:37.754726   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:37.754809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:37.795259   57719 cri.go:89] found id: ""
	I0410 22:49:37.795291   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.795301   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:37.795307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:37.795375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:37.836961   57719 cri.go:89] found id: ""
	I0410 22:49:37.836994   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.837004   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:37.837011   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:37.837075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:37.876195   57719 cri.go:89] found id: ""
	I0410 22:49:37.876223   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.876233   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:37.876239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:37.876290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:37.911688   57719 cri.go:89] found id: ""
	I0410 22:49:37.911715   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.911725   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:37.911736   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:37.911751   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:37.954690   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:37.954734   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:38.006731   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:38.006771   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:38.024290   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:38.024314   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:38.148504   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:38.148529   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:38.148561   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:38.519483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:40.520822   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:39.150543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:41.151300   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:42.217749   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1: (2.564772479s)
	I0410 22:49:42.217778   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1 from cache
	I0410 22:49:42.217802   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:42.217843   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:44.577826   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1: (2.359955682s)
	I0410 22:49:44.577865   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1 from cache
	I0410 22:49:44.577892   57270 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:44.577940   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:40.726314   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:40.743098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:40.743168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:40.794673   57719 cri.go:89] found id: ""
	I0410 22:49:40.794697   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.794704   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:40.794710   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:40.794756   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:40.836274   57719 cri.go:89] found id: ""
	I0410 22:49:40.836308   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.836319   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:40.836327   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:40.836408   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:40.882249   57719 cri.go:89] found id: ""
	I0410 22:49:40.882276   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.882285   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:40.882292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:40.882357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:40.925829   57719 cri.go:89] found id: ""
	I0410 22:49:40.925867   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.925878   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:40.925885   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:40.925936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:40.978494   57719 cri.go:89] found id: ""
	I0410 22:49:40.978529   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.978540   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:40.978547   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:40.978611   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:41.020935   57719 cri.go:89] found id: ""
	I0410 22:49:41.020964   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.020975   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:41.020982   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:41.021040   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:41.060779   57719 cri.go:89] found id: ""
	I0410 22:49:41.060812   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.060824   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:41.060831   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:41.060885   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:41.119604   57719 cri.go:89] found id: ""
	I0410 22:49:41.119632   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.119643   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:41.119653   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:41.119667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:41.188739   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:41.188774   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:41.203682   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:41.203735   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:41.293423   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:41.293451   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:41.293468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:41.366606   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:41.366649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:43.914447   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:43.930350   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:43.930439   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:43.968867   57719 cri.go:89] found id: ""
	I0410 22:49:43.968921   57719 logs.go:276] 0 containers: []
	W0410 22:49:43.968932   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:43.968939   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:43.969012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:44.010143   57719 cri.go:89] found id: ""
	I0410 22:49:44.010169   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.010181   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:44.010188   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:44.010264   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:44.048610   57719 cri.go:89] found id: ""
	I0410 22:49:44.048637   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.048645   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:44.048651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:44.048697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:44.105939   57719 cri.go:89] found id: ""
	I0410 22:49:44.105973   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.106001   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:44.106009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:44.106086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:44.149699   57719 cri.go:89] found id: ""
	I0410 22:49:44.149726   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.149735   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:44.149743   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:44.149803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:44.193131   57719 cri.go:89] found id: ""
	I0410 22:49:44.193159   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.193167   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:44.193173   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:44.193255   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:44.233751   57719 cri.go:89] found id: ""
	I0410 22:49:44.233781   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.233789   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:44.233801   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:44.233868   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:44.284404   57719 cri.go:89] found id: ""
	I0410 22:49:44.284432   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.284441   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:44.284449   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:44.284461   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:44.330082   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:44.330118   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:44.383452   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:44.383487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:44.399604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:44.399632   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:44.476328   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:44.476368   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:44.476415   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:43.019922   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.519954   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:43.650596   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.651668   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.537183   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0410 22:49:45.537228   57270 cache_images.go:123] Successfully loaded all cached images
	I0410 22:49:45.537235   57270 cache_images.go:92] duration metric: took 16.68459637s to LoadCachedImages
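	For each image reported above as "needs transfer", the loader runs the same per-image sequence over SSH; reconstructed from the commands in this log (stat format artifact expanded) and shown for etcd only:
	
	    sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0   # present in the runtime at the expected hash?
	    sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0                     # remove the missing/mismatched tag
	    stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0                     # cached tarball already on the VM? skip the copy if so
	    sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0                 # load it into CRI-O's image store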
	I0410 22:49:45.537249   57270 kubeadm.go:928] updating node { 192.168.50.17 8443 v1.30.0-rc.1 crio true true} ...
	I0410 22:49:45.537401   57270 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-646133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:45.537476   57270 ssh_runner.go:195] Run: crio config
	I0410 22:49:45.587002   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:45.587031   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:45.587047   57270 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:45.587069   57270 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.17 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-646133 NodeName:no-preload-646133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:45.587205   57270 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-646133"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:45.587272   57270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.1
	I0410 22:49:45.600694   57270 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:45.600758   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:45.613884   57270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0410 22:49:45.633871   57270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0410 22:49:45.654733   57270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
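(Editor's aside, not part of the captured log: the generated config shown above is written to /var/tmp/minikube/kubeadm.yaml.new before being swapped into place. A minimal Go sketch of how such a multi-document file could be sanity-checked before use; the sigs.k8s.io/yaml dependency, the docHeader struct, and the field names checked are assumptions for illustration, not minikube's own validation.)

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"sigs.k8s.io/yaml"
    )

    // docHeader captures the apiVersion/kind pair each kubeadm, kubelet and
    // kube-proxy document in the dump above carries.
    type docHeader struct {
    	APIVersion string `json:"apiVersion"`
    	Kind       string `json:"kind"`
    }

    func main() {
    	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// The file is a multi-document YAML stream separated by "---",
    	// so split and decode each document independently.
    	for _, doc := range strings.Split(string(raw), "\n---\n") {
    		if strings.TrimSpace(doc) == "" {
    			continue
    		}
    		var hdr docHeader
    		if err := yaml.Unmarshal([]byte(doc), &hdr); err != nil {
    			fmt.Fprintln(os.Stderr, "invalid document:", err)
    			os.Exit(1)
    		}
    		fmt.Printf("%s %s\n", hdr.APIVersion, hdr.Kind)
    	}
    }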
	I0410 22:49:45.673976   57270 ssh_runner.go:195] Run: grep 192.168.50.17	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:45.678260   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:45.693499   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:45.819034   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:45.838775   57270 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133 for IP: 192.168.50.17
	I0410 22:49:45.838799   57270 certs.go:194] generating shared ca certs ...
	I0410 22:49:45.838819   57270 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:45.839010   57270 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:45.839064   57270 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:45.839078   57270 certs.go:256] generating profile certs ...
	I0410 22:49:45.839175   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.key
	I0410 22:49:45.839256   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key.d257fb06
	I0410 22:49:45.839310   57270 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key
	I0410 22:49:45.839480   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:45.839521   57270 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:45.839531   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:45.839551   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:45.839608   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:45.839633   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:45.839674   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:45.840315   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:45.897688   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:45.932242   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:45.979537   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:46.020562   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0410 22:49:46.057254   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:49:46.084070   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:46.112807   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0410 22:49:46.141650   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:46.170167   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:46.196917   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:46.222645   57270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:46.242626   57270 ssh_runner.go:195] Run: openssl version
	I0410 22:49:46.249048   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:46.265110   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270018   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270083   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.276298   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:46.288165   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:46.299040   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303584   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303627   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.309278   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:46.319990   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:46.331654   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336700   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336750   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.342767   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:46.355005   57270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:46.359870   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:46.366270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:46.372625   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:46.379270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:46.386312   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:46.392796   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:49:46.399209   57270 kubeadm.go:391] StartCluster: {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:46.399318   57270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:46.399405   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.439061   57270 cri.go:89] found id: ""
	I0410 22:49:46.439149   57270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:46.450243   57270 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:46.450265   57270 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:46.450271   57270 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:46.450323   57270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:46.460553   57270 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:46.461608   57270 kubeconfig.go:125] found "no-preload-646133" server: "https://192.168.50.17:8443"
	I0410 22:49:46.464469   57270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:46.474775   57270 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.17
	I0410 22:49:46.474808   57270 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:46.474820   57270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:46.474860   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.514933   57270 cri.go:89] found id: ""
	I0410 22:49:46.515010   57270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:46.533830   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:46.547026   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:46.547042   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:46.547081   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:49:46.557093   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:46.557157   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:46.567102   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:49:46.576939   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:46.576998   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:46.586921   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.596189   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:46.596260   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.607803   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:49:46.618166   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:46.618240   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:46.628406   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:46.638748   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:46.767824   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.028868   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.261006059s)
	I0410 22:49:48.028907   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.253185   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.323164   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.404069   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:48.404153   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:48.904557   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.404477   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.437891   57270 api_server.go:72] duration metric: took 1.033818826s to wait for apiserver process to appear ...
	I0410 22:49:49.437927   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:49.437953   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:49.438623   57270 api_server.go:269] stopped: https://192.168.50.17:8443/healthz: Get "https://192.168.50.17:8443/healthz": dial tcp 192.168.50.17:8443: connect: connection refused
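(Editor's aside, not part of the captured log: the api_server.go lines above and below amount to a retry loop over an HTTPS GET against https://192.168.50.17:8443/healthz until it stops returning connection refused, 403 or 500 and answers 200. A minimal Go sketch of that pattern; the waitForHealthz name, the timeouts and the InsecureSkipVerify transport are illustrative assumptions, not minikube's actual implementation.)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver cert is not trusted by the probing host,
    			// so skip verification for this bootstrap health probe only.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s never became healthy", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.17:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }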
	I0410 22:49:47.054122   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:47.069583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:47.069654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:47.113953   57719 cri.go:89] found id: ""
	I0410 22:49:47.113981   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.113989   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:47.113995   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:47.114054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:47.156770   57719 cri.go:89] found id: ""
	I0410 22:49:47.156798   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.156808   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:47.156814   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:47.156891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:47.195227   57719 cri.go:89] found id: ""
	I0410 22:49:47.195252   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.195261   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:47.195266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:47.195328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:47.238109   57719 cri.go:89] found id: ""
	I0410 22:49:47.238138   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.238150   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:47.238157   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:47.238212   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.285062   57719 cri.go:89] found id: ""
	I0410 22:49:47.285093   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.285101   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:47.285108   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:47.285185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:47.324635   57719 cri.go:89] found id: ""
	I0410 22:49:47.324663   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.324670   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:47.324676   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:47.324744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:47.365404   57719 cri.go:89] found id: ""
	I0410 22:49:47.365437   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.365445   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:47.365468   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:47.365535   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:47.412296   57719 cri.go:89] found id: ""
	I0410 22:49:47.412335   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.412346   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:47.412367   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:47.412384   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:47.497998   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:47.498019   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:47.498033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:47.590502   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:47.590536   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:47.647665   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:47.647692   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:47.697704   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:47.697741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.213410   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:50.229408   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:50.229488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:50.268514   57719 cri.go:89] found id: ""
	I0410 22:49:50.268545   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.268556   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:50.268563   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:50.268620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:50.308733   57719 cri.go:89] found id: ""
	I0410 22:49:50.308762   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.308790   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:50.308796   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:50.308857   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:50.353929   57719 cri.go:89] found id: ""
	I0410 22:49:50.353966   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.353977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:50.353985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:50.354043   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:50.397979   57719 cri.go:89] found id: ""
	I0410 22:49:50.398009   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.398019   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:50.398026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:50.398086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.521284   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.018571   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.020874   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:48.151768   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.151820   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:49.939075   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.355813   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:52.355855   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:52.355868   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.502702   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.502733   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.502796   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.509360   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.509401   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.939056   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.946114   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.946154   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.438741   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.444154   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:53.444187   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.938848   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.947578   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:49:53.956247   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:49:53.956281   57270 api_server.go:131] duration metric: took 4.518344859s to wait for apiserver health ...
	I0410 22:49:53.956292   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:53.956301   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:53.958053   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:53.959420   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:53.973242   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:54.004623   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:54.024138   57270 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:54.024185   57270 system_pods.go:61] "coredns-7db6d8ff4d-lbcp6" [1ff36529-d718-41e7-9b61-54ba32efab0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:54.024195   57270 system_pods.go:61] "etcd-no-preload-646133" [a704a953-1418-4425-8ac1-272c632050c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:54.024214   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [90d4ff18-767c-4dbf-b4ad-ff02cb3d542f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:54.024231   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [82c0778e-690f-41a6-a57f-017ab79fd029] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:54.024243   57270 system_pods.go:61] "kube-proxy-v5fbl" [002efd18-4375-455b-9b4a-15bb739120e0] Running
	I0410 22:49:54.024252   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fa9898bc-36a6-4cc4-91e6-bba4ccd22d9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:54.024264   57270 system_pods.go:61] "metrics-server-569cc877fc-pw276" [22de5c2f-13ab-4f69-8eb6-ec4a3c3d1e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:54.024277   57270 system_pods.go:61] "storage-provisioner" [1028921e-3924-4614-bcb6-f949c18e9e4e] Running
	I0410 22:49:54.024287   57270 system_pods.go:74] duration metric: took 19.638409ms to wait for pod list to return data ...
	I0410 22:49:54.024301   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:54.031666   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:54.031694   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:54.031705   57270 node_conditions.go:105] duration metric: took 7.394201ms to run NodePressure ...
	I0410 22:49:54.031720   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:54.339352   57270 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345115   57270 kubeadm.go:733] kubelet initialised
	I0410 22:49:54.345146   57270 kubeadm.go:734] duration metric: took 5.76519ms waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345156   57270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:54.352254   57270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
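(Editor's aside, not part of the captured log: the pod_ready waits interleaved through the rest of this log, for coredns-7db6d8ff4d-lbcp6 above and the metrics-server pods below, boil down to repeatedly fetching the pod and checking its Ready condition. A minimal client-go sketch of that check, assuming the kubeconfig at /var/lib/minikube/kubeconfig is reachable; the polling interval and program structure are illustrative, not minikube's pod_ready helper.)

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	for {
    		pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-lbcp6", metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		fmt.Println("pod not Ready yet")
    		time.Sleep(2 * time.Second)
    	}
    }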
	I0410 22:49:50.436191   57719 cri.go:89] found id: ""
	I0410 22:49:50.436222   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.436234   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:50.436241   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:50.436316   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:50.476462   57719 cri.go:89] found id: ""
	I0410 22:49:50.476486   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.476494   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:50.476499   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:50.476557   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:50.520025   57719 cri.go:89] found id: ""
	I0410 22:49:50.520054   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.520063   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:50.520071   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:50.520127   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:50.564535   57719 cri.go:89] found id: ""
	I0410 22:49:50.564570   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.564581   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:50.564593   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:50.564624   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:50.620587   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:50.620629   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.634802   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:50.634832   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:50.707625   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:50.707655   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:50.707671   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:50.791935   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:50.791970   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:53.339109   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:53.361555   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:53.361632   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:53.428170   57719 cri.go:89] found id: ""
	I0410 22:49:53.428202   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.428212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:53.428219   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:53.428281   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:53.501929   57719 cri.go:89] found id: ""
	I0410 22:49:53.501957   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.501968   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:53.501977   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:53.502055   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:53.548844   57719 cri.go:89] found id: ""
	I0410 22:49:53.548871   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.548890   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:53.548897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:53.548949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:53.595056   57719 cri.go:89] found id: ""
	I0410 22:49:53.595081   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.595090   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:53.595098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:53.595153   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:53.638885   57719 cri.go:89] found id: ""
	I0410 22:49:53.638920   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.638938   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:53.638946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:53.639046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:53.685526   57719 cri.go:89] found id: ""
	I0410 22:49:53.685565   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.685573   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:53.685579   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:53.685650   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:53.725084   57719 cri.go:89] found id: ""
	I0410 22:49:53.725112   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.725119   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:53.725125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:53.725172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:53.767031   57719 cri.go:89] found id: ""
	I0410 22:49:53.767062   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.767072   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:53.767083   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:53.767103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:53.826570   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:53.826618   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:53.843784   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:53.843822   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:53.926277   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:53.926299   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:53.926317   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:54.024735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:54.024782   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
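	(Editor's note: the 57719 cycle above is minikube enumerating control-plane containers one component at a time with "sudo crictl ps -a --quiet --name=<component>"; every probe returns an empty ID list, so the node has no running or exited control-plane containers at this point. Below is a minimal, hypothetical Go sketch of the same manual check. It is not minikube's own code; it assumes it is run on the guest node with crictl installed and passwordless sudo.)

	// checkcri.go - hedged sketch mirroring the crictl probes seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same control-plane component names minikube queries in the log.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// --quiet prints only container IDs; an empty result corresponds to the
			// `found id: ""` / `0 containers: []` lines in the log.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%-24s crictl error: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%-24s %d container(s)\n", name, len(ids))
		}
	}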
	I0410 22:49:54.519305   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.520139   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.651382   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:55.149798   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:57.150803   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.359479   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:58.859341   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.586265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:56.602113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:56.602200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:56.647041   57719 cri.go:89] found id: ""
	I0410 22:49:56.647074   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.647086   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:56.647094   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:56.647168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:56.688053   57719 cri.go:89] found id: ""
	I0410 22:49:56.688086   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.688096   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:56.688104   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:56.688190   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:56.729176   57719 cri.go:89] found id: ""
	I0410 22:49:56.729210   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.729221   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:56.729229   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:56.729293   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:56.768877   57719 cri.go:89] found id: ""
	I0410 22:49:56.768905   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.768913   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:56.768919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:56.768966   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:56.807228   57719 cri.go:89] found id: ""
	I0410 22:49:56.807274   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.807286   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:56.807294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:56.807361   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:56.848183   57719 cri.go:89] found id: ""
	I0410 22:49:56.848216   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.848224   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:56.848230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:56.848284   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:56.887894   57719 cri.go:89] found id: ""
	I0410 22:49:56.887923   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.887931   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:56.887937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:56.887993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:56.926908   57719 cri.go:89] found id: ""
	I0410 22:49:56.926935   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.926944   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:56.926952   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:56.926968   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:57.012614   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:57.012640   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:57.012657   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:57.098735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:57.098784   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:57.140798   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:57.140831   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:57.204239   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:57.204283   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:59.720328   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:59.735964   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:59.736042   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:59.774351   57719 cri.go:89] found id: ""
	I0410 22:49:59.774383   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.774393   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:59.774407   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:59.774468   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:59.817222   57719 cri.go:89] found id: ""
	I0410 22:49:59.817248   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.817255   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:59.817260   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:59.817310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:59.854551   57719 cri.go:89] found id: ""
	I0410 22:49:59.854582   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.854594   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:59.854602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:59.854656   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:59.894334   57719 cri.go:89] found id: ""
	I0410 22:49:59.894367   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.894375   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:59.894381   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:59.894442   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:59.932446   57719 cri.go:89] found id: ""
	I0410 22:49:59.932472   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.932482   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:59.932489   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:59.932552   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:59.969168   57719 cri.go:89] found id: ""
	I0410 22:49:59.969193   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.969201   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:59.969209   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:59.969273   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:00.006918   57719 cri.go:89] found id: ""
	I0410 22:50:00.006960   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.006972   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:00.006979   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:00.007036   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:00.050380   57719 cri.go:89] found id: ""
	I0410 22:50:00.050411   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.050424   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:00.050433   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:00.050454   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:00.066340   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:00.066366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:00.146454   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:00.146479   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:00.146494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:00.231174   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:00.231225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:00.278732   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:00.278759   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:59.020938   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.518584   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:59.151137   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.650307   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.359992   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:01.360021   57270 pod_ready.go:81] duration metric: took 7.007734788s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:01.360035   57270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867322   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:02.867349   57270 pod_ready.go:81] duration metric: took 1.507305949s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867362   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.833035   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:02.847316   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:02.847380   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:02.888793   57719 cri.go:89] found id: ""
	I0410 22:50:02.888821   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.888832   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:02.888840   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:02.888897   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:02.926495   57719 cri.go:89] found id: ""
	I0410 22:50:02.926525   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.926535   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:02.926542   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:02.926603   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:02.966185   57719 cri.go:89] found id: ""
	I0410 22:50:02.966217   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.966227   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:02.966233   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:02.966295   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:03.007383   57719 cri.go:89] found id: ""
	I0410 22:50:03.007408   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.007414   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:03.007420   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:03.007490   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:03.044245   57719 cri.go:89] found id: ""
	I0410 22:50:03.044273   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.044281   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:03.044292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:03.044367   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:03.078820   57719 cri.go:89] found id: ""
	I0410 22:50:03.078849   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.078859   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:03.078866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:03.078927   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:03.117205   57719 cri.go:89] found id: ""
	I0410 22:50:03.117233   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.117244   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:03.117251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:03.117313   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:03.155698   57719 cri.go:89] found id: ""
	I0410 22:50:03.155725   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.155735   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:03.155743   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:03.155758   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:03.231685   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:03.231712   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:03.231724   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:03.315122   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:03.315167   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:03.361151   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:03.361186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:03.412134   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:03.412168   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:04.017523   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.024382   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.150291   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.151488   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.873656   57270 pod_ready.go:102] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:05.874079   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:05.874106   57270 pod_ready.go:81] duration metric: took 3.006735064s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.874116   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:07.880447   57270 pod_ready.go:102] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.881209   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.881241   57270 pod_ready.go:81] duration metric: took 3.007117254s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.881271   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887939   57270 pod_ready.go:92] pod "kube-proxy-v5fbl" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.887963   57270 pod_ready.go:81] duration metric: took 6.68304ms for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887975   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894389   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.894415   57270 pod_ready.go:81] duration metric: took 6.43215ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894428   57270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
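	(Editor's note: the pod_ready.go lines from the parallel no-preload run, PID 57270, show each control-plane pod being polled until its Ready condition flips to True, with a 4m0s budget per pod, before the wait moves on to the next pod and finally to the metrics-server pod, which stays not-Ready. The following is a minimal client-go sketch of such a readiness wait, assuming a default kubeconfig; the pod name is taken from the log for illustration only, and this is not minikube's actual pod_ready.go implementation.)

	// podready.go - hedged sketch of a "wait until pod is Ready" loop.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the PodReady condition is True, matching the
	// `has status "Ready":"True"` lines in the log.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		ns, name := "kube-system", "etcd-no-preload-646133" // illustrative, from the log
		start := time.Now()
		// Poll every 2s for up to 4 minutes, mirroring the 4m0s budget in the log.
		err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return podIsReady(pod), nil
		})
		if err != nil {
			fmt.Printf("pod %q never became Ready: %v\n", name, err)
			return
		}
		fmt.Printf("pod %q Ready after %s\n", name, time.Since(start))
	}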
	I0410 22:50:05.928116   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:05.942237   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:05.942337   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:05.983813   57719 cri.go:89] found id: ""
	I0410 22:50:05.983842   57719 logs.go:276] 0 containers: []
	W0410 22:50:05.983853   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:05.983861   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:05.983945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:06.024590   57719 cri.go:89] found id: ""
	I0410 22:50:06.024618   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.024626   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:06.024637   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:06.024698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:06.063040   57719 cri.go:89] found id: ""
	I0410 22:50:06.063075   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.063087   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:06.063094   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:06.063160   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:06.102224   57719 cri.go:89] found id: ""
	I0410 22:50:06.102250   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.102259   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:06.102273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:06.102342   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:06.144202   57719 cri.go:89] found id: ""
	I0410 22:50:06.144229   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.144236   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:06.144242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:06.144288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:06.189215   57719 cri.go:89] found id: ""
	I0410 22:50:06.189243   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.189250   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:06.189256   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:06.189308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:06.225218   57719 cri.go:89] found id: ""
	I0410 22:50:06.225247   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.225258   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:06.225266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:06.225330   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:06.265229   57719 cri.go:89] found id: ""
	I0410 22:50:06.265262   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.265273   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:06.265283   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:06.265306   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:06.279794   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:06.279825   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:06.348038   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:06.348063   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:06.348079   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:06.431293   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:06.431339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:06.476033   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:06.476060   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.032099   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:09.046628   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:09.046765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:09.086900   57719 cri.go:89] found id: ""
	I0410 22:50:09.086928   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.086936   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:09.086942   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:09.086998   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:09.124989   57719 cri.go:89] found id: ""
	I0410 22:50:09.125018   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.125028   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:09.125035   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:09.125096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:09.163720   57719 cri.go:89] found id: ""
	I0410 22:50:09.163749   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.163761   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:09.163769   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:09.163822   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:09.203846   57719 cri.go:89] found id: ""
	I0410 22:50:09.203875   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.203883   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:09.203888   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:09.203945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:09.242974   57719 cri.go:89] found id: ""
	I0410 22:50:09.243002   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.243016   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:09.243024   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:09.243092   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:09.278664   57719 cri.go:89] found id: ""
	I0410 22:50:09.278687   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.278694   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:09.278700   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:09.278762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:09.313335   57719 cri.go:89] found id: ""
	I0410 22:50:09.313359   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.313367   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:09.313372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:09.313419   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:09.351160   57719 cri.go:89] found id: ""
	I0410 22:50:09.351195   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.351206   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:09.351225   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:09.351239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:09.425989   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:09.426015   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:09.426033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:09.505189   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:09.505223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:09.549619   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:09.549651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.604322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:09.604360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:08.520115   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:11.018253   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.649190   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.650453   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.903726   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.401154   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:12.119780   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:12.135377   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:12.135458   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:12.178105   57719 cri.go:89] found id: ""
	I0410 22:50:12.178129   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.178138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:12.178144   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:12.178207   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:12.217369   57719 cri.go:89] found id: ""
	I0410 22:50:12.217397   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.217409   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:12.217424   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:12.217488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:12.254185   57719 cri.go:89] found id: ""
	I0410 22:50:12.254213   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.254222   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:12.254230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:12.254291   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:12.295007   57719 cri.go:89] found id: ""
	I0410 22:50:12.295038   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.295048   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:12.295057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:12.295125   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:12.334620   57719 cri.go:89] found id: ""
	I0410 22:50:12.334644   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.334651   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:12.334657   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:12.334707   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:12.371217   57719 cri.go:89] found id: ""
	I0410 22:50:12.371241   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.371249   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:12.371255   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:12.371302   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:12.409571   57719 cri.go:89] found id: ""
	I0410 22:50:12.409599   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.409608   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:12.409617   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:12.409675   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:12.453133   57719 cri.go:89] found id: ""
	I0410 22:50:12.453159   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.453169   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:12.453180   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:12.453194   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:12.505322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:12.505360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:12.520284   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:12.520315   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:12.608057   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:12.608082   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:12.608097   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:12.693240   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:12.693274   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:15.244628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:15.261915   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:15.262020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:15.302874   57719 cri.go:89] found id: ""
	I0410 22:50:15.302903   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.302910   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:15.302916   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:15.302973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:15.347492   57719 cri.go:89] found id: ""
	I0410 22:50:15.347518   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.347527   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:15.347534   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:15.347598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:15.394156   57719 cri.go:89] found id: ""
	I0410 22:50:15.394188   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.394198   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:15.394205   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:15.394265   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:13.518316   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.520507   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.150145   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.651083   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.401582   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:17.901179   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.437656   57719 cri.go:89] found id: ""
	I0410 22:50:15.437682   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.437690   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:15.437695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:15.437748   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:15.475658   57719 cri.go:89] found id: ""
	I0410 22:50:15.475686   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.475697   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:15.475704   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:15.475765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:15.517908   57719 cri.go:89] found id: ""
	I0410 22:50:15.517930   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.517937   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:15.517942   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:15.517991   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:15.560083   57719 cri.go:89] found id: ""
	I0410 22:50:15.560108   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.560117   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:15.560123   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:15.560178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:15.603967   57719 cri.go:89] found id: ""
	I0410 22:50:15.603994   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.604002   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:15.604013   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:15.604028   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:15.659994   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:15.660029   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:15.675627   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:15.675658   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:15.761297   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:15.761320   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:15.761339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:15.839225   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:15.839265   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.386062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:18.399609   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:18.399677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:18.443002   57719 cri.go:89] found id: ""
	I0410 22:50:18.443030   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.443040   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:18.443048   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:18.443106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:18.485089   57719 cri.go:89] found id: ""
	I0410 22:50:18.485121   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.485132   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:18.485140   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:18.485200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:18.524310   57719 cri.go:89] found id: ""
	I0410 22:50:18.524338   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.524347   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:18.524354   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:18.524412   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:18.563535   57719 cri.go:89] found id: ""
	I0410 22:50:18.563573   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.563582   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:18.563587   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:18.563634   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:18.600451   57719 cri.go:89] found id: ""
	I0410 22:50:18.600478   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.600487   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:18.600495   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:18.600562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:18.640445   57719 cri.go:89] found id: ""
	I0410 22:50:18.640472   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.640480   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:18.640485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:18.640550   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:18.677691   57719 cri.go:89] found id: ""
	I0410 22:50:18.677725   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.677746   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:18.677754   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:18.677817   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:18.716753   57719 cri.go:89] found id: ""
	I0410 22:50:18.716850   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.716876   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:18.716897   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:18.716918   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:18.804099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:18.804130   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:18.804144   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:18.883569   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:18.883611   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.930014   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:18.930045   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:18.980029   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:18.980065   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:18.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.020820   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:18.151029   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.650000   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:19.904069   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.401462   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:24.401892   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:21.495499   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:21.511001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:21.511075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:21.551469   57719 cri.go:89] found id: ""
	I0410 22:50:21.551511   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.551522   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:21.551540   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:21.551605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:21.590539   57719 cri.go:89] found id: ""
	I0410 22:50:21.590570   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.590580   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:21.590587   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:21.590654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:21.629005   57719 cri.go:89] found id: ""
	I0410 22:50:21.629030   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.629042   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:21.629048   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:21.629108   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:21.669745   57719 cri.go:89] found id: ""
	I0410 22:50:21.669767   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.669774   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:21.669780   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:21.669834   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:21.707806   57719 cri.go:89] found id: ""
	I0410 22:50:21.707831   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.707839   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:21.707844   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:21.707892   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:21.746698   57719 cri.go:89] found id: ""
	I0410 22:50:21.746727   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.746736   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:21.746742   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:21.746802   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:21.783048   57719 cri.go:89] found id: ""
	I0410 22:50:21.783070   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.783079   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:21.783084   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:21.783131   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:21.822457   57719 cri.go:89] found id: ""
	I0410 22:50:21.822484   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.822492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:21.822500   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:21.822513   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:21.894706   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:21.894747   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:21.909861   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:21.909903   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:21.999344   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:21.999370   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:21.999386   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:22.080004   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:22.080042   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:24.620924   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:24.634937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:24.634999   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:24.686619   57719 cri.go:89] found id: ""
	I0410 22:50:24.686644   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.686655   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:24.686662   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:24.686744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:24.723632   57719 cri.go:89] found id: ""
	I0410 22:50:24.723658   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.723667   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:24.723675   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:24.723738   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:24.760708   57719 cri.go:89] found id: ""
	I0410 22:50:24.760739   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.760750   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:24.760757   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:24.760804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:24.795680   57719 cri.go:89] found id: ""
	I0410 22:50:24.795712   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.795722   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:24.795729   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:24.795793   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:24.833033   57719 cri.go:89] found id: ""
	I0410 22:50:24.833063   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.833074   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:24.833082   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:24.833130   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:24.872840   57719 cri.go:89] found id: ""
	I0410 22:50:24.872864   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.872871   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:24.872877   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:24.872936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:24.915640   57719 cri.go:89] found id: ""
	I0410 22:50:24.915678   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.915688   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:24.915696   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:24.915755   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:24.957164   57719 cri.go:89] found id: ""
	I0410 22:50:24.957207   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.957219   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:24.957230   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:24.957244   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:25.006551   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:25.006601   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:25.021623   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:25.021649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:25.094699   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:25.094722   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:25.094741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:25.181280   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:25.181316   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:22.518442   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.018206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.650481   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.151162   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:26.904127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.400642   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.723475   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:27.737294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:27.737381   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:27.776098   57719 cri.go:89] found id: ""
	I0410 22:50:27.776126   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.776138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:27.776146   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:27.776203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:27.814324   57719 cri.go:89] found id: ""
	I0410 22:50:27.814352   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.814364   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:27.814371   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:27.814447   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:27.849573   57719 cri.go:89] found id: ""
	I0410 22:50:27.849603   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.849614   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:27.849621   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:27.849682   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:27.888904   57719 cri.go:89] found id: ""
	I0410 22:50:27.888932   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.888940   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:27.888946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:27.888993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:27.931772   57719 cri.go:89] found id: ""
	I0410 22:50:27.931800   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.931812   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:27.931821   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:27.931881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:27.975633   57719 cri.go:89] found id: ""
	I0410 22:50:27.975666   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.975676   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:27.975684   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:27.975736   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:28.012251   57719 cri.go:89] found id: ""
	I0410 22:50:28.012280   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.012290   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:28.012298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:28.012364   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:28.048848   57719 cri.go:89] found id: ""
	I0410 22:50:28.048886   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.048898   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:28.048908   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:28.048923   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:28.102215   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:28.102257   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:28.118052   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:28.118081   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:28.190738   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:28.190762   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:28.190777   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:28.269294   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:28.269330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:27.519211   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.521111   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.017915   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.651922   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.150852   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:31.401210   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:33.902054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.833927   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:30.848196   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:30.848266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:30.886077   57719 cri.go:89] found id: ""
	I0410 22:50:30.886117   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.886127   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:30.886133   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:30.886179   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:30.924638   57719 cri.go:89] found id: ""
	I0410 22:50:30.924668   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.924678   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:30.924686   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:30.924762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:30.961106   57719 cri.go:89] found id: ""
	I0410 22:50:30.961136   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.961147   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:30.961154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:30.961213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:31.001374   57719 cri.go:89] found id: ""
	I0410 22:50:31.001412   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.001427   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:31.001434   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:31.001498   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:31.038928   57719 cri.go:89] found id: ""
	I0410 22:50:31.038961   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.038971   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:31.038980   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:31.039057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:31.077033   57719 cri.go:89] found id: ""
	I0410 22:50:31.077067   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.077076   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:31.077083   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:31.077139   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:31.115227   57719 cri.go:89] found id: ""
	I0410 22:50:31.115257   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.115266   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:31.115273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:31.115335   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:31.157339   57719 cri.go:89] found id: ""
	I0410 22:50:31.157372   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.157382   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:31.157393   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:31.157409   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:31.198742   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:31.198770   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:31.255388   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:31.255422   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:31.272018   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:31.272048   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:31.344503   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:31.344524   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:31.344541   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:33.925749   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:33.939402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:33.939475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:33.976070   57719 cri.go:89] found id: ""
	I0410 22:50:33.976093   57719 logs.go:276] 0 containers: []
	W0410 22:50:33.976100   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:33.976106   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:33.976172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:34.013723   57719 cri.go:89] found id: ""
	I0410 22:50:34.013748   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.013758   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:34.013765   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:34.013821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:34.062678   57719 cri.go:89] found id: ""
	I0410 22:50:34.062704   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.062712   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:34.062718   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:34.062774   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:34.123007   57719 cri.go:89] found id: ""
	I0410 22:50:34.123038   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.123046   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:34.123052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:34.123096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:34.188811   57719 cri.go:89] found id: ""
	I0410 22:50:34.188841   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.188852   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:34.188859   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:34.188949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:34.223585   57719 cri.go:89] found id: ""
	I0410 22:50:34.223609   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.223618   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:34.223625   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:34.223680   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:34.260004   57719 cri.go:89] found id: ""
	I0410 22:50:34.260028   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.260036   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:34.260041   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:34.260096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:34.303064   57719 cri.go:89] found id: ""
	I0410 22:50:34.303093   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.303104   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:34.303115   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:34.303134   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:34.359105   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:34.359142   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:34.375420   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:34.375450   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:34.449619   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:34.449645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:34.449660   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:34.534214   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:34.534248   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:34.518609   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.016973   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.649917   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:34.661652   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.150648   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:36.401988   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:38.901505   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.076525   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:37.090789   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:37.090849   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:37.130848   57719 cri.go:89] found id: ""
	I0410 22:50:37.130881   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.130893   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:37.130900   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:37.130967   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:37.170158   57719 cri.go:89] found id: ""
	I0410 22:50:37.170181   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.170188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:37.170194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:37.170269   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:37.210238   57719 cri.go:89] found id: ""
	I0410 22:50:37.210264   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.210274   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:37.210282   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:37.210328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:37.256763   57719 cri.go:89] found id: ""
	I0410 22:50:37.256789   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.256800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:37.256807   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:37.256875   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:37.295323   57719 cri.go:89] found id: ""
	I0410 22:50:37.295355   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.295364   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:37.295372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:37.295443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:37.334066   57719 cri.go:89] found id: ""
	I0410 22:50:37.334094   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.334105   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:37.334113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:37.334170   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:37.374428   57719 cri.go:89] found id: ""
	I0410 22:50:37.374458   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.374477   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:37.374485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:37.374544   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:37.412114   57719 cri.go:89] found id: ""
	I0410 22:50:37.412142   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.412152   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:37.412161   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:37.412174   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:37.453693   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:37.453717   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:37.505484   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:37.505524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:37.523645   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:37.523672   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:37.595107   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:37.595134   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:37.595150   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.180649   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:40.195168   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:40.195243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:40.240130   57719 cri.go:89] found id: ""
	I0410 22:50:40.240160   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.240169   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:40.240175   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:40.240241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:40.276366   57719 cri.go:89] found id: ""
	I0410 22:50:40.276390   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.276406   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:40.276412   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:40.276466   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:40.314991   57719 cri.go:89] found id: ""
	I0410 22:50:40.315016   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.315023   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:40.315029   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:40.315075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:40.354301   57719 cri.go:89] found id: ""
	I0410 22:50:40.354331   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.354342   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:40.354349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:40.354414   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:40.393093   57719 cri.go:89] found id: ""
	I0410 22:50:40.393125   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.393135   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:40.393143   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:40.393204   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:39.021170   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:41.518285   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:39.650047   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.150206   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.902024   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.904180   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.429641   57719 cri.go:89] found id: ""
	I0410 22:50:40.429665   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.429674   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:40.429680   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:40.429727   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:40.468184   57719 cri.go:89] found id: ""
	I0410 22:50:40.468213   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.468224   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:40.468232   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:40.468304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:40.505586   57719 cri.go:89] found id: ""
	I0410 22:50:40.505616   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.505627   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:40.505637   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:40.505652   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:40.562078   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:40.562119   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:40.578135   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:40.578213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:40.659018   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:40.659047   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:40.659061   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.746434   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:40.746478   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.287852   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:43.301797   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:43.301869   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:43.339778   57719 cri.go:89] found id: ""
	I0410 22:50:43.339813   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.339822   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:43.339829   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:43.339893   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:43.378716   57719 cri.go:89] found id: ""
	I0410 22:50:43.378748   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.378759   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:43.378767   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:43.378836   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:43.417128   57719 cri.go:89] found id: ""
	I0410 22:50:43.417152   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.417163   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:43.417171   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:43.417234   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:43.459577   57719 cri.go:89] found id: ""
	I0410 22:50:43.459608   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.459617   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:43.459623   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:43.459678   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:43.497519   57719 cri.go:89] found id: ""
	I0410 22:50:43.497551   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.497561   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:43.497566   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:43.497620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:43.534400   57719 cri.go:89] found id: ""
	I0410 22:50:43.534433   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.534444   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:43.534451   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:43.534540   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:43.574213   57719 cri.go:89] found id: ""
	I0410 22:50:43.574242   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.574253   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:43.574283   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:43.574344   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:43.611078   57719 cri.go:89] found id: ""
	I0410 22:50:43.611106   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.611113   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:43.611121   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:43.611137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:43.698166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:43.698202   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.749368   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:43.749395   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:43.801584   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:43.801621   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:43.817012   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:43.817050   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:43.892325   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:43.518660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.017804   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:44.650389   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.650560   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:45.401723   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:47.901852   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.393325   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:46.407985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:46.408045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:46.442704   57719 cri.go:89] found id: ""
	I0410 22:50:46.442735   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.442745   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:46.442753   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:46.442821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:46.485582   57719 cri.go:89] found id: ""
	I0410 22:50:46.485611   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.485618   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:46.485625   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:46.485683   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:46.524199   57719 cri.go:89] found id: ""
	I0410 22:50:46.524227   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.524234   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:46.524240   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:46.524288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:46.560655   57719 cri.go:89] found id: ""
	I0410 22:50:46.560685   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.560694   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:46.560701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:46.560839   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:46.596617   57719 cri.go:89] found id: ""
	I0410 22:50:46.596646   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.596658   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:46.596666   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:46.596739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:46.634316   57719 cri.go:89] found id: ""
	I0410 22:50:46.634339   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.634347   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:46.634352   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:46.634399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:46.671466   57719 cri.go:89] found id: ""
	I0410 22:50:46.671493   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.671502   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:46.671509   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:46.671582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:46.709228   57719 cri.go:89] found id: ""
	I0410 22:50:46.709254   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.709265   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:46.709275   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:46.709291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:46.761329   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:46.761366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:46.778265   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:46.778288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:46.851092   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:46.851113   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:46.851125   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:46.929181   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:46.929223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:49.471285   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:49.485474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:49.485551   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:49.523799   57719 cri.go:89] found id: ""
	I0410 22:50:49.523826   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.523838   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:49.523846   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:49.523899   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:49.562102   57719 cri.go:89] found id: ""
	I0410 22:50:49.562129   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.562137   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:49.562143   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:49.562196   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:49.600182   57719 cri.go:89] found id: ""
	I0410 22:50:49.600204   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.600211   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:49.600216   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:49.600262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:49.640002   57719 cri.go:89] found id: ""
	I0410 22:50:49.640028   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.640039   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:49.640047   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:49.640111   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:49.678815   57719 cri.go:89] found id: ""
	I0410 22:50:49.678847   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.678858   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:49.678866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:49.678929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:49.716933   57719 cri.go:89] found id: ""
	I0410 22:50:49.716959   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.716969   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:49.716976   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:49.717039   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:49.756018   57719 cri.go:89] found id: ""
	I0410 22:50:49.756050   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.756060   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:49.756068   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:49.756132   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:49.802066   57719 cri.go:89] found id: ""
	I0410 22:50:49.802094   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.802103   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:49.802110   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:49.802123   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:49.856363   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:49.856417   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:49.872297   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:49.872330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:49.950152   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:49.950174   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:49.950185   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:50.031251   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:50.031291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:48.517547   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.517942   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:49.150498   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:51.151491   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.401650   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.401866   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.574794   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:52.589052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:52.589117   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:52.625911   57719 cri.go:89] found id: ""
	I0410 22:50:52.625941   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.625952   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:52.625960   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:52.626020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:52.668749   57719 cri.go:89] found id: ""
	I0410 22:50:52.668773   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.668781   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:52.668787   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:52.668835   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:52.713420   57719 cri.go:89] found id: ""
	I0410 22:50:52.713447   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.713457   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:52.713473   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:52.713538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:52.750265   57719 cri.go:89] found id: ""
	I0410 22:50:52.750294   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.750301   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:52.750307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:52.750354   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:52.787552   57719 cri.go:89] found id: ""
	I0410 22:50:52.787586   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.787597   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:52.787604   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:52.787670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:52.827988   57719 cri.go:89] found id: ""
	I0410 22:50:52.828013   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.828020   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:52.828026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:52.828072   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:52.864115   57719 cri.go:89] found id: ""
	I0410 22:50:52.864144   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.864155   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:52.864161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:52.864222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:52.906673   57719 cri.go:89] found id: ""
	I0410 22:50:52.906702   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.906712   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:52.906723   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:52.906742   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:52.960842   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:52.960892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:52.976084   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:52.976114   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:53.052612   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:53.052638   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:53.052656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:53.132465   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:53.132518   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:53.018789   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.518169   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:53.154117   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.653267   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:54.903797   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:57.401445   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.676947   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:55.691098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:55.691183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:55.728711   57719 cri.go:89] found id: ""
	I0410 22:50:55.728740   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.728750   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:55.728758   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:55.728824   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:55.768540   57719 cri.go:89] found id: ""
	I0410 22:50:55.768568   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.768578   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:55.768584   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:55.768649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:55.806901   57719 cri.go:89] found id: ""
	I0410 22:50:55.806928   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.806938   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:55.806945   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:55.807019   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:55.846777   57719 cri.go:89] found id: ""
	I0410 22:50:55.846807   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.846816   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:55.846822   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:55.846873   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:55.887143   57719 cri.go:89] found id: ""
	I0410 22:50:55.887172   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.887181   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:55.887186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:55.887241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:55.929008   57719 cri.go:89] found id: ""
	I0410 22:50:55.929032   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.929040   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:55.929046   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:55.929098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:55.969496   57719 cri.go:89] found id: ""
	I0410 22:50:55.969526   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.969536   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:55.969544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:55.969605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:56.007786   57719 cri.go:89] found id: ""
	I0410 22:50:56.007818   57719 logs.go:276] 0 containers: []
	W0410 22:50:56.007828   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:56.007838   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:56.007854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:56.061616   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:56.061653   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:56.078664   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:56.078689   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:56.165015   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:56.165037   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:56.165053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:56.241928   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:56.241971   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:58.785955   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:58.799544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:58.799604   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:58.837234   57719 cri.go:89] found id: ""
	I0410 22:50:58.837264   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.837275   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:58.837283   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:58.837350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:58.877818   57719 cri.go:89] found id: ""
	I0410 22:50:58.877854   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.877861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:58.877867   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:58.877921   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:58.919705   57719 cri.go:89] found id: ""
	I0410 22:50:58.919729   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.919740   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:58.919747   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:58.919809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:58.957995   57719 cri.go:89] found id: ""
	I0410 22:50:58.958020   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.958029   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:58.958036   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:58.958091   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:58.999966   57719 cri.go:89] found id: ""
	I0410 22:50:58.999995   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.000008   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:59.000016   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:59.000088   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:59.040516   57719 cri.go:89] found id: ""
	I0410 22:50:59.040541   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.040552   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:59.040560   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:59.040623   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:59.078869   57719 cri.go:89] found id: ""
	I0410 22:50:59.078899   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.078908   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:59.078913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:59.078961   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:59.116637   57719 cri.go:89] found id: ""
	I0410 22:50:59.116663   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.116670   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:59.116679   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:59.116697   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:59.195852   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:59.195892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:59.243256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:59.243282   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:59.299195   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:59.299263   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:59.314512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:59.314537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:59.386468   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:58.016995   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.018205   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:58.151543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.650140   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:59.901858   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.902933   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:04.402128   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.886907   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:01.905169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:01.905251   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:01.944154   57719 cri.go:89] found id: ""
	I0410 22:51:01.944187   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.944198   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:01.944205   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:01.944268   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:01.982743   57719 cri.go:89] found id: ""
	I0410 22:51:01.982778   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.982789   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:01.982797   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:01.982864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:02.020072   57719 cri.go:89] found id: ""
	I0410 22:51:02.020094   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.020102   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:02.020159   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:02.020213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:02.064250   57719 cri.go:89] found id: ""
	I0410 22:51:02.064273   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.064280   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:02.064286   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:02.064339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:02.105013   57719 cri.go:89] found id: ""
	I0410 22:51:02.105045   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.105054   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:02.105060   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:02.105106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:02.145664   57719 cri.go:89] found id: ""
	I0410 22:51:02.145689   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.145695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:02.145701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:02.145759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:02.189752   57719 cri.go:89] found id: ""
	I0410 22:51:02.189831   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.189850   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:02.189857   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:02.189929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:02.228315   57719 cri.go:89] found id: ""
	I0410 22:51:02.228347   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.228358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:02.228374   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:02.228390   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:02.281425   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:02.281460   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:02.296003   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:02.296031   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:02.389572   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:02.389599   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:02.389613   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.475881   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:02.475916   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.022037   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:05.037242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:05.037304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:05.073656   57719 cri.go:89] found id: ""
	I0410 22:51:05.073687   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.073698   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:05.073705   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:05.073767   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:05.114321   57719 cri.go:89] found id: ""
	I0410 22:51:05.114348   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.114356   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:05.114361   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:05.114430   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:05.153119   57719 cri.go:89] found id: ""
	I0410 22:51:05.153156   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.153164   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:05.153170   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:05.153230   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:05.193393   57719 cri.go:89] found id: ""
	I0410 22:51:05.193420   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.193428   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:05.193433   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:05.193479   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:05.229826   57719 cri.go:89] found id: ""
	I0410 22:51:05.229853   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.229861   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:05.229867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:05.229915   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:05.265511   57719 cri.go:89] found id: ""
	I0410 22:51:05.265544   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.265555   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:05.265562   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:05.265627   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:05.302257   57719 cri.go:89] found id: ""
	I0410 22:51:05.302287   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.302297   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:05.302305   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:05.302386   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:05.347344   57719 cri.go:89] found id: ""
	I0410 22:51:05.347372   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.347380   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:05.347388   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:05.347399   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:05.421796   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:05.421817   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:05.421829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.521499   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.017660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.017945   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:02.651104   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.150286   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.150565   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:06.402266   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:08.406456   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.501803   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:05.501839   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.549161   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:05.549195   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:05.599598   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:05.599633   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.115679   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:08.130273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:08.130350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:08.172302   57719 cri.go:89] found id: ""
	I0410 22:51:08.172328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.172335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:08.172342   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:08.172390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:08.220789   57719 cri.go:89] found id: ""
	I0410 22:51:08.220812   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.220819   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:08.220825   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:08.220874   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:08.258299   57719 cri.go:89] found id: ""
	I0410 22:51:08.258328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.258341   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:08.258349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:08.258404   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:08.297698   57719 cri.go:89] found id: ""
	I0410 22:51:08.297726   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.297733   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:08.297739   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:08.297787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:08.335564   57719 cri.go:89] found id: ""
	I0410 22:51:08.335595   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.335605   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:08.335613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:08.335671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:08.373340   57719 cri.go:89] found id: ""
	I0410 22:51:08.373367   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.373377   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:08.373384   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:08.373481   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:08.413961   57719 cri.go:89] found id: ""
	I0410 22:51:08.413984   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.413993   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:08.414001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:08.414062   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:08.459449   57719 cri.go:89] found id: ""
	I0410 22:51:08.459481   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.459492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:08.459505   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:08.459521   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:08.518061   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:08.518103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.533653   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:08.533680   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:08.619882   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:08.619917   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:08.619932   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:08.696329   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:08.696364   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:09.518298   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.518877   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:09.650387   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.650614   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:10.902634   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:13.402009   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
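The interleaved pod_ready lines belong to three other test processes (57270, 58186, 58701) that are polling their clusters' metrics-server pods; the Ready condition stays False for the entire window shown. An equivalent one-off check against the affected profile's kubeconfig would look like the sketch below (pod name copied from the log; substitute the pod and context for the cluster being inspected):

    kubectl -n kube-system get pod metrics-server-569cc877fc-pw276 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'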
	I0410 22:51:11.256846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:11.271521   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:11.271582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:11.312829   57719 cri.go:89] found id: ""
	I0410 22:51:11.312851   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.312869   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:11.312876   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:11.312930   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:11.355183   57719 cri.go:89] found id: ""
	I0410 22:51:11.355210   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.355220   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:11.355227   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:11.355287   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:11.394345   57719 cri.go:89] found id: ""
	I0410 22:51:11.394376   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.394388   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:11.394396   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:11.394460   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:11.434128   57719 cri.go:89] found id: ""
	I0410 22:51:11.434155   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.434163   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:11.434169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:11.434219   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:11.473160   57719 cri.go:89] found id: ""
	I0410 22:51:11.473189   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.473201   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:11.473208   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:11.473278   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:11.513782   57719 cri.go:89] found id: ""
	I0410 22:51:11.513815   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.513826   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:11.513835   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:11.513891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:11.556057   57719 cri.go:89] found id: ""
	I0410 22:51:11.556085   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.556093   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:11.556100   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:11.556147   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:11.594557   57719 cri.go:89] found id: ""
	I0410 22:51:11.594579   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.594586   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:11.594594   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:11.594609   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:11.672795   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:11.672841   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:11.716011   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:11.716046   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:11.769372   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:11.769413   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:11.784589   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:11.784617   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:11.857051   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.358019   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:14.372116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:14.372192   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:14.412020   57719 cri.go:89] found id: ""
	I0410 22:51:14.412049   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.412061   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:14.412068   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:14.412128   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:14.450317   57719 cri.go:89] found id: ""
	I0410 22:51:14.450349   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.450360   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:14.450368   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:14.450426   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:14.509080   57719 cri.go:89] found id: ""
	I0410 22:51:14.509104   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.509110   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:14.509116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:14.509185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:14.561540   57719 cri.go:89] found id: ""
	I0410 22:51:14.561572   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.561583   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:14.561590   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:14.561670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:14.622498   57719 cri.go:89] found id: ""
	I0410 22:51:14.622528   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.622538   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:14.622546   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:14.622606   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:14.678451   57719 cri.go:89] found id: ""
	I0410 22:51:14.678481   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.678490   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:14.678498   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:14.678560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:14.720264   57719 cri.go:89] found id: ""
	I0410 22:51:14.720302   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.720315   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:14.720323   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:14.720388   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:14.758039   57719 cri.go:89] found id: ""
	I0410 22:51:14.758063   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.758071   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:14.758079   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:14.758090   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:14.808111   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:14.808171   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:14.825444   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:14.825487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:14.906859   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
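Every "describe nodes" attempt above fails the same way because nothing is serving on the apiserver port, so kubectl's connection to localhost:8443 is refused. One way to confirm that by hand from inside the guest (assuming ss and curl are available there; neither is part of the logged commands):

    sudo ss -ltnp | grep 8443                  # nothing listening while the apiserver is down
    curl -k https://localhost:8443/healthz     # expect "Connection refused" in this state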
	I0410 22:51:14.906884   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:14.906899   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:14.995176   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:14.995225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:14.017397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.017624   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:14.149898   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.150320   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:15.901542   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.902391   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.541159   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:17.556679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:17.556749   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:17.595839   57719 cri.go:89] found id: ""
	I0410 22:51:17.595869   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.595880   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:17.595895   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:17.595954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:17.633921   57719 cri.go:89] found id: ""
	I0410 22:51:17.633947   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.633957   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:17.633964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:17.634033   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:17.673467   57719 cri.go:89] found id: ""
	I0410 22:51:17.673493   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.673501   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:17.673507   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:17.673554   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:17.709631   57719 cri.go:89] found id: ""
	I0410 22:51:17.709660   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.709670   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:17.709679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:17.709739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:17.760852   57719 cri.go:89] found id: ""
	I0410 22:51:17.760880   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.760893   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:17.760908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:17.760969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:17.798074   57719 cri.go:89] found id: ""
	I0410 22:51:17.798099   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.798108   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:17.798117   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:17.798178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:17.835807   57719 cri.go:89] found id: ""
	I0410 22:51:17.835839   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.835854   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:17.835863   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:17.835935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:17.876812   57719 cri.go:89] found id: ""
	I0410 22:51:17.876846   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.876856   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:17.876868   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:17.876882   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:17.891121   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:17.891149   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:17.966241   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:17.966264   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:17.966277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:18.042633   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:18.042667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:18.088294   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:18.088327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:18.518103   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.519397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:18.650784   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:21.150770   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.403127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:22.901329   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.647016   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:20.662573   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:20.662640   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:20.701147   57719 cri.go:89] found id: ""
	I0410 22:51:20.701173   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.701184   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:20.701191   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:20.701252   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:20.739005   57719 cri.go:89] found id: ""
	I0410 22:51:20.739038   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.739049   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:20.739057   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:20.739112   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:20.776335   57719 cri.go:89] found id: ""
	I0410 22:51:20.776365   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.776379   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:20.776386   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:20.776471   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:20.814755   57719 cri.go:89] found id: ""
	I0410 22:51:20.814789   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.814800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:20.814808   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:20.814867   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:20.853872   57719 cri.go:89] found id: ""
	I0410 22:51:20.853897   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.853904   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:20.853910   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:20.853958   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:20.891616   57719 cri.go:89] found id: ""
	I0410 22:51:20.891648   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.891656   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:20.891662   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:20.891710   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:20.930285   57719 cri.go:89] found id: ""
	I0410 22:51:20.930316   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.930326   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:20.930341   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:20.930398   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:20.967857   57719 cri.go:89] found id: ""
	I0410 22:51:20.967894   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.967904   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:20.967913   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:20.967934   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:21.053166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:21.053201   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:21.098860   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:21.098888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:21.150395   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:21.150430   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:21.164707   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:21.164737   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:21.251010   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:23.751441   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:23.769949   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:23.770014   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:23.809652   57719 cri.go:89] found id: ""
	I0410 22:51:23.809678   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.809686   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:23.809692   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:23.809740   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:23.847331   57719 cri.go:89] found id: ""
	I0410 22:51:23.847364   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.847374   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:23.847383   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:23.847445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:23.889459   57719 cri.go:89] found id: ""
	I0410 22:51:23.889488   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.889498   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:23.889505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:23.889564   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:23.932683   57719 cri.go:89] found id: ""
	I0410 22:51:23.932712   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.932720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:23.932727   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:23.932787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:23.974161   57719 cri.go:89] found id: ""
	I0410 22:51:23.974187   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.974194   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:23.974200   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:23.974253   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:24.013058   57719 cri.go:89] found id: ""
	I0410 22:51:24.013087   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.013098   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:24.013106   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:24.013169   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:24.052556   57719 cri.go:89] found id: ""
	I0410 22:51:24.052582   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.052590   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:24.052596   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:24.052643   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:24.089940   57719 cri.go:89] found id: ""
	I0410 22:51:24.089967   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.089974   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:24.089982   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:24.089992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:24.133198   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:24.133226   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:24.186615   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:24.186651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:24.200559   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:24.200586   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:24.277061   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:24.277093   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:24.277109   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:23.016887   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:25.018325   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:27.018514   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:23.650669   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.149198   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:24.901704   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.902227   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.902337   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.855354   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:26.870269   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:26.870329   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:26.910056   57719 cri.go:89] found id: ""
	I0410 22:51:26.910084   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.910094   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:26.910101   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:26.910163   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:26.949646   57719 cri.go:89] found id: ""
	I0410 22:51:26.949674   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.949684   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:26.949690   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:26.949759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:26.990945   57719 cri.go:89] found id: ""
	I0410 22:51:26.990970   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.990977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:26.990984   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:26.991053   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:27.029464   57719 cri.go:89] found id: ""
	I0410 22:51:27.029491   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.029500   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:27.029505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:27.029562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:27.072194   57719 cri.go:89] found id: ""
	I0410 22:51:27.072235   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.072260   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:27.072270   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:27.072339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:27.106942   57719 cri.go:89] found id: ""
	I0410 22:51:27.106969   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.106979   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:27.106985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:27.107045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:27.144851   57719 cri.go:89] found id: ""
	I0410 22:51:27.144885   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.144894   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:27.144909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:27.144970   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:27.188138   57719 cri.go:89] found id: ""
	I0410 22:51:27.188166   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.188178   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:27.188189   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:27.188204   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:27.241911   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:27.241943   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:27.255296   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:27.255322   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:27.327638   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:27.327663   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:27.327678   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:27.409048   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:27.409083   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:29.960093   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:29.975583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:29.975647   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:30.018120   57719 cri.go:89] found id: ""
	I0410 22:51:30.018149   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.018159   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:30.018166   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:30.018225   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:30.055487   57719 cri.go:89] found id: ""
	I0410 22:51:30.055511   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.055518   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:30.055524   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:30.055573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:30.093723   57719 cri.go:89] found id: ""
	I0410 22:51:30.093749   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.093756   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:30.093761   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:30.093808   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:30.138278   57719 cri.go:89] found id: ""
	I0410 22:51:30.138306   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.138317   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:30.138324   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:30.138385   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:30.174454   57719 cri.go:89] found id: ""
	I0410 22:51:30.174484   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.174495   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:30.174502   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:30.174573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:30.213189   57719 cri.go:89] found id: ""
	I0410 22:51:30.213214   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.213221   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:30.213227   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:30.213272   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:30.253264   57719 cri.go:89] found id: ""
	I0410 22:51:30.253294   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.253304   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:30.253309   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:30.253357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:30.289729   57719 cri.go:89] found id: ""
	I0410 22:51:30.289755   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.289767   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:30.289777   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:30.289793   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:30.303387   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:30.303416   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:30.381294   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:30.381315   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:30.381331   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:29.019226   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:31.519681   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.150621   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.649807   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.903662   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:33.401827   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.468072   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:30.468110   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:30.508761   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:30.508794   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.061654   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:33.077072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:33.077146   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:33.113753   57719 cri.go:89] found id: ""
	I0410 22:51:33.113781   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.113791   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:33.113798   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:33.113848   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:33.149212   57719 cri.go:89] found id: ""
	I0410 22:51:33.149238   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.149249   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:33.149256   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:33.149321   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:33.185619   57719 cri.go:89] found id: ""
	I0410 22:51:33.185649   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.185659   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:33.185667   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:33.185725   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:33.222270   57719 cri.go:89] found id: ""
	I0410 22:51:33.222301   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.222313   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:33.222320   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:33.222375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:33.258594   57719 cri.go:89] found id: ""
	I0410 22:51:33.258624   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.258636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:33.258642   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:33.258689   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:33.298326   57719 cri.go:89] found id: ""
	I0410 22:51:33.298360   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.298368   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:33.298374   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:33.298438   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:33.337407   57719 cri.go:89] found id: ""
	I0410 22:51:33.337438   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.337449   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:33.337456   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:33.337520   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:33.374971   57719 cri.go:89] found id: ""
	I0410 22:51:33.375003   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.375014   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:33.375024   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:33.375039   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:33.415256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:33.415288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.467895   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:33.467929   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:33.484604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:33.484639   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:33.562267   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:33.562288   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:33.562299   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:34.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.519093   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:32.650396   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.150200   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.902810   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:38.401463   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.142628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:36.157825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:36.157883   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:36.199418   57719 cri.go:89] found id: ""
	I0410 22:51:36.199446   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.199456   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:36.199463   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:36.199523   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:36.238136   57719 cri.go:89] found id: ""
	I0410 22:51:36.238166   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.238174   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:36.238180   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:36.238229   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:36.273995   57719 cri.go:89] found id: ""
	I0410 22:51:36.274026   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.274037   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:36.274049   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:36.274110   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:36.311007   57719 cri.go:89] found id: ""
	I0410 22:51:36.311039   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.311049   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:36.311057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:36.311122   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:36.351062   57719 cri.go:89] found id: ""
	I0410 22:51:36.351086   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.351093   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:36.351099   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:36.351152   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:36.388660   57719 cri.go:89] found id: ""
	I0410 22:51:36.388689   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.388703   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:36.388711   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:36.388762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:36.428715   57719 cri.go:89] found id: ""
	I0410 22:51:36.428753   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.428761   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:36.428767   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:36.428831   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:36.467186   57719 cri.go:89] found id: ""
	I0410 22:51:36.467213   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.467220   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:36.467228   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:36.467239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:36.521831   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:36.521860   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:36.536929   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:36.536957   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:36.614624   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:36.614647   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:36.614659   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:36.694604   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:36.694646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.240039   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:39.255177   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:39.255262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:39.293063   57719 cri.go:89] found id: ""
	I0410 22:51:39.293091   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.293113   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:39.293120   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:39.293181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:39.331603   57719 cri.go:89] found id: ""
	I0410 22:51:39.331631   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.331639   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:39.331645   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:39.331697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:39.372881   57719 cri.go:89] found id: ""
	I0410 22:51:39.372908   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.372919   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:39.372926   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:39.372987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:39.417399   57719 cri.go:89] found id: ""
	I0410 22:51:39.417425   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.417435   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:39.417442   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:39.417503   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:39.458836   57719 cri.go:89] found id: ""
	I0410 22:51:39.458868   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.458877   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:39.458882   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:39.458932   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:39.496436   57719 cri.go:89] found id: ""
	I0410 22:51:39.496460   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.496467   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:39.496474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:39.496532   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:39.534649   57719 cri.go:89] found id: ""
	I0410 22:51:39.534681   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.534690   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:39.534695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:39.534754   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:39.571677   57719 cri.go:89] found id: ""
	I0410 22:51:39.571698   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.571705   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:39.571714   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:39.571725   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.621445   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:39.621482   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:39.676341   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:39.676382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:39.691543   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:39.691573   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:39.769452   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:39.769477   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:39.769493   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:39.017483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:41.020027   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:37.651534   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.151404   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.401635   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.401931   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.401972   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.350823   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:42.367124   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:42.367199   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:42.407511   57719 cri.go:89] found id: ""
	I0410 22:51:42.407545   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.407554   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:42.407560   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:42.407622   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:42.442913   57719 cri.go:89] found id: ""
	I0410 22:51:42.442948   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.442958   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:42.442964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:42.443027   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:42.480747   57719 cri.go:89] found id: ""
	I0410 22:51:42.480777   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.480786   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:42.480792   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:42.480846   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:42.521610   57719 cri.go:89] found id: ""
	I0410 22:51:42.521635   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.521644   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:42.521651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:42.521698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:42.561076   57719 cri.go:89] found id: ""
	I0410 22:51:42.561108   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.561119   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:42.561127   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:42.561189   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:42.598034   57719 cri.go:89] found id: ""
	I0410 22:51:42.598059   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.598066   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:42.598072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:42.598129   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:42.637051   57719 cri.go:89] found id: ""
	I0410 22:51:42.637085   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.637095   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:42.637103   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:42.637162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:42.676051   57719 cri.go:89] found id: ""
	I0410 22:51:42.676084   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.676094   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:42.676105   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:42.676120   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:42.719607   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:42.719634   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:42.770791   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:42.770829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:42.785704   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:42.785730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:42.876445   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:42.876475   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:42.876490   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:43.518453   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.019450   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.650486   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.650894   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:47.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.901358   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:48.902417   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:45.458721   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:45.474125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:45.474203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:45.511105   57719 cri.go:89] found id: ""
	I0410 22:51:45.511143   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.511153   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:45.511161   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:45.511220   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:45.552891   57719 cri.go:89] found id: ""
	I0410 22:51:45.552916   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.552924   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:45.552930   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:45.552986   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:45.592423   57719 cri.go:89] found id: ""
	I0410 22:51:45.592458   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.592474   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:45.592481   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:45.592542   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:45.630964   57719 cri.go:89] found id: ""
	I0410 22:51:45.631009   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.631026   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:45.631033   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:45.631098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:45.669557   57719 cri.go:89] found id: ""
	I0410 22:51:45.669586   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.669595   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:45.669602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:45.669702   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:45.706359   57719 cri.go:89] found id: ""
	I0410 22:51:45.706387   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.706395   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:45.706402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:45.706463   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:45.743301   57719 cri.go:89] found id: ""
	I0410 22:51:45.743330   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.743337   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:45.743343   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:45.743390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:45.781679   57719 cri.go:89] found id: ""
	I0410 22:51:45.781703   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.781711   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:45.781718   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:45.781730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:45.835251   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:45.835286   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:45.849255   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:45.849284   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:45.918404   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:45.918436   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:45.918452   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:45.999556   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:45.999591   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.546421   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:48.561243   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:48.561314   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:48.618335   57719 cri.go:89] found id: ""
	I0410 22:51:48.618361   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.618369   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:48.618375   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:48.618445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:48.656116   57719 cri.go:89] found id: ""
	I0410 22:51:48.656151   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.656160   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:48.656167   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:48.656222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:48.694846   57719 cri.go:89] found id: ""
	I0410 22:51:48.694874   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.694884   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:48.694897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:48.694971   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:48.731988   57719 cri.go:89] found id: ""
	I0410 22:51:48.732020   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.732031   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:48.732039   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:48.732102   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:48.768595   57719 cri.go:89] found id: ""
	I0410 22:51:48.768627   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.768636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:48.768643   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:48.768708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:48.807263   57719 cri.go:89] found id: ""
	I0410 22:51:48.807292   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.807302   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:48.807308   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:48.807366   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:48.845291   57719 cri.go:89] found id: ""
	I0410 22:51:48.845317   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.845325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:48.845329   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:48.845399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:48.891056   57719 cri.go:89] found id: ""
	I0410 22:51:48.891081   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.891091   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:48.891102   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:48.891117   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.931963   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:48.931992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:48.985539   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:48.985579   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:49.000685   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:49.000716   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:49.076097   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:49.076127   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:49.076143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:48.517879   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.018479   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:49.150511   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.650519   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.400971   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:53.401596   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.663336   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:51.678249   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:51.678315   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:51.720062   57719 cri.go:89] found id: ""
	I0410 22:51:51.720088   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.720096   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:51.720103   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:51.720164   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:51.766351   57719 cri.go:89] found id: ""
	I0410 22:51:51.766387   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.766395   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:51.766401   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:51.766448   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:51.813037   57719 cri.go:89] found id: ""
	I0410 22:51:51.813068   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.813080   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:51.813087   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:51.813150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:51.849232   57719 cri.go:89] found id: ""
	I0410 22:51:51.849262   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.849273   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:51.849280   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:51.849346   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:51.886392   57719 cri.go:89] found id: ""
	I0410 22:51:51.886415   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.886422   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:51.886428   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:51.886485   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:51.930859   57719 cri.go:89] found id: ""
	I0410 22:51:51.930896   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.930905   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:51.930913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:51.930978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:51.970403   57719 cri.go:89] found id: ""
	I0410 22:51:51.970501   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.970524   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:51.970533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:51.970599   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:52.008281   57719 cri.go:89] found id: ""
	I0410 22:51:52.008311   57719 logs.go:276] 0 containers: []
	W0410 22:51:52.008322   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:52.008333   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:52.008347   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:52.060623   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:52.060656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:52.075529   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:52.075559   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:52.158330   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:52.158356   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:52.158371   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:52.236356   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:52.236392   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:54.782448   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:54.796928   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:54.796997   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:54.836297   57719 cri.go:89] found id: ""
	I0410 22:51:54.836326   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.836335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:54.836341   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:54.836390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:54.873501   57719 cri.go:89] found id: ""
	I0410 22:51:54.873532   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.873540   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:54.873547   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:54.873617   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:54.914200   57719 cri.go:89] found id: ""
	I0410 22:51:54.914227   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.914238   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:54.914247   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:54.914308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:54.958654   57719 cri.go:89] found id: ""
	I0410 22:51:54.958682   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.958693   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:54.958702   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:54.958761   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:55.017032   57719 cri.go:89] found id: ""
	I0410 22:51:55.017078   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.017090   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:55.017101   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:55.017167   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:55.093024   57719 cri.go:89] found id: ""
	I0410 22:51:55.093059   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.093070   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:55.093085   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:55.093156   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:55.142412   57719 cri.go:89] found id: ""
	I0410 22:51:55.142441   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.142456   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:55.142464   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:55.142521   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:55.180116   57719 cri.go:89] found id: ""
	I0410 22:51:55.180147   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.180159   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:55.180169   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:55.180186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:55.249118   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:55.249139   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:55.249153   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:55.327558   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:55.327597   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:55.373127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:55.373163   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:53.518589   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.017080   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:54.151372   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.650238   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.401716   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:57.902174   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.431602   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:55.431647   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:57.947559   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:57.962916   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:57.962983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:58.000955   57719 cri.go:89] found id: ""
	I0410 22:51:58.000983   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.000990   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:58.000997   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:58.001049   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:58.040556   57719 cri.go:89] found id: ""
	I0410 22:51:58.040579   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.040586   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:58.040592   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:58.040649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:58.079121   57719 cri.go:89] found id: ""
	I0410 22:51:58.079148   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.079155   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:58.079161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:58.079240   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:58.119876   57719 cri.go:89] found id: ""
	I0410 22:51:58.119902   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.119914   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:58.119929   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:58.119987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:58.160130   57719 cri.go:89] found id: ""
	I0410 22:51:58.160162   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.160173   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:58.160181   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:58.160258   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:58.198162   57719 cri.go:89] found id: ""
	I0410 22:51:58.198195   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.198207   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:58.198215   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:58.198266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:58.235049   57719 cri.go:89] found id: ""
	I0410 22:51:58.235078   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.235089   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:58.235096   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:58.235157   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:58.275786   57719 cri.go:89] found id: ""
	I0410 22:51:58.275825   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.275845   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:58.275856   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:58.275872   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:58.316246   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:58.316277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:58.371614   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:58.371649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:58.386610   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:58.386646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:58.465167   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:58.465187   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:58.465199   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:58.018362   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.517710   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:59.152119   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.650566   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.401148   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:02.401494   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.401624   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.049405   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:01.073251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:01.073328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:01.125169   57719 cri.go:89] found id: ""
	I0410 22:52:01.125201   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.125212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:01.125220   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:01.125289   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:01.171256   57719 cri.go:89] found id: ""
	I0410 22:52:01.171289   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.171300   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:01.171308   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:01.171376   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:01.210444   57719 cri.go:89] found id: ""
	I0410 22:52:01.210478   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.210489   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:01.210503   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:01.210568   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:01.252448   57719 cri.go:89] found id: ""
	I0410 22:52:01.252473   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.252480   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:01.252486   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:01.252531   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:01.293084   57719 cri.go:89] found id: ""
	I0410 22:52:01.293117   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.293128   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:01.293136   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:01.293208   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:01.330992   57719 cri.go:89] found id: ""
	I0410 22:52:01.331019   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.331026   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:01.331032   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:01.331081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:01.369286   57719 cri.go:89] found id: ""
	I0410 22:52:01.369315   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.369325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:01.369331   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:01.369378   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:01.409888   57719 cri.go:89] found id: ""
	I0410 22:52:01.409916   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.409924   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:01.409933   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:01.409944   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:01.484535   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:01.484557   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:01.484569   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:01.565727   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:01.565778   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:01.606987   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:01.607018   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:01.659492   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:01.659529   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.174971   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:04.190302   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:04.190382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:04.230050   57719 cri.go:89] found id: ""
	I0410 22:52:04.230080   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.230090   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:04.230097   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:04.230162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:04.269870   57719 cri.go:89] found id: ""
	I0410 22:52:04.269902   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.269908   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:04.269914   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:04.269969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:04.310977   57719 cri.go:89] found id: ""
	I0410 22:52:04.311008   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.311019   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:04.311026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:04.311096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:04.349108   57719 cri.go:89] found id: ""
	I0410 22:52:04.349136   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.349147   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:04.349154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:04.349216   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:04.389590   57719 cri.go:89] found id: ""
	I0410 22:52:04.389613   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.389625   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:04.389633   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:04.389697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:04.432962   57719 cri.go:89] found id: ""
	I0410 22:52:04.432989   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.433001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:04.433008   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:04.433070   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:04.473912   57719 cri.go:89] found id: ""
	I0410 22:52:04.473946   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.473955   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:04.473960   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:04.474029   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:04.516157   57719 cri.go:89] found id: ""
	I0410 22:52:04.516182   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.516192   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:04.516203   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:04.516218   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:04.569047   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:04.569082   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:04.622639   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:04.622673   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.638441   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:04.638470   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:04.718203   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:04.718227   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:04.718241   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:02.518104   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.519509   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.519648   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.150041   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.150157   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.902111   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.902816   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:07.302147   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:07.315919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:07.315984   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:07.354692   57719 cri.go:89] found id: ""
	I0410 22:52:07.354723   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.354733   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:07.354740   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:07.354803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:07.393418   57719 cri.go:89] found id: ""
	I0410 22:52:07.393447   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.393459   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:07.393466   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:07.393525   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:07.436810   57719 cri.go:89] found id: ""
	I0410 22:52:07.436837   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.436847   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:07.436855   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:07.436920   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:07.478685   57719 cri.go:89] found id: ""
	I0410 22:52:07.478709   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.478720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:07.478735   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:07.478792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:07.515699   57719 cri.go:89] found id: ""
	I0410 22:52:07.515727   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.515737   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:07.515744   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:07.515805   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:07.556419   57719 cri.go:89] found id: ""
	I0410 22:52:07.556443   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.556451   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:07.556457   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:07.556560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:07.598076   57719 cri.go:89] found id: ""
	I0410 22:52:07.598106   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.598113   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:07.598119   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:07.598183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:07.637778   57719 cri.go:89] found id: ""
	I0410 22:52:07.637814   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.637826   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:07.637839   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:07.637854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:07.693688   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:07.693728   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:07.709256   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:07.709289   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:07.778519   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:07.778544   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:07.778584   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:07.858937   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:07.858973   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.405765   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:10.422019   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:10.422083   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:09.017771   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.017883   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.151568   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.650989   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.402181   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.902520   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.463779   57719 cri.go:89] found id: ""
	I0410 22:52:10.463818   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.463829   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:10.463836   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:10.463923   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:10.503680   57719 cri.go:89] found id: ""
	I0410 22:52:10.503710   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.503718   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:10.503736   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:10.503804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:10.545567   57719 cri.go:89] found id: ""
	I0410 22:52:10.545594   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.545605   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:10.545613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:10.545671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:10.590864   57719 cri.go:89] found id: ""
	I0410 22:52:10.590892   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.590901   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:10.590908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:10.590968   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:10.634628   57719 cri.go:89] found id: ""
	I0410 22:52:10.634659   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.634670   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:10.634677   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:10.634758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:10.681477   57719 cri.go:89] found id: ""
	I0410 22:52:10.681507   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.681526   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:10.681533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:10.681585   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:10.725203   57719 cri.go:89] found id: ""
	I0410 22:52:10.725229   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.725328   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:10.725368   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:10.725443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:10.764994   57719 cri.go:89] found id: ""
	I0410 22:52:10.765028   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.765036   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:10.765044   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:10.765094   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.808981   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:10.809012   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:10.866429   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:10.866468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:10.882512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:10.882537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:10.963016   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:10.963041   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:10.963053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:13.544552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:13.558161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:13.558238   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:13.596945   57719 cri.go:89] found id: ""
	I0410 22:52:13.596977   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.596988   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:13.596996   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:13.597057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:13.637920   57719 cri.go:89] found id: ""
	I0410 22:52:13.637944   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.637951   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:13.637958   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:13.638012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:13.676777   57719 cri.go:89] found id: ""
	I0410 22:52:13.676808   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.676819   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:13.676826   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:13.676887   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:13.714054   57719 cri.go:89] found id: ""
	I0410 22:52:13.714078   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.714086   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:13.714091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:13.714142   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:13.757162   57719 cri.go:89] found id: ""
	I0410 22:52:13.757194   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.757206   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:13.757214   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:13.757276   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:13.793578   57719 cri.go:89] found id: ""
	I0410 22:52:13.793616   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.793629   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:13.793636   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:13.793697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:13.831307   57719 cri.go:89] found id: ""
	I0410 22:52:13.831336   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.831346   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:13.831353   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:13.831400   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:13.872072   57719 cri.go:89] found id: ""
	I0410 22:52:13.872109   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.872117   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:13.872127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:13.872143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:13.926909   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:13.926947   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:13.943095   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:13.943126   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:14.015301   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:14.015336   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:14.015351   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:14.101100   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:14.101137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:13.019599   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.517932   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.150248   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.650269   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.401396   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.402384   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.650213   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:16.664603   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:16.664677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:16.701498   57719 cri.go:89] found id: ""
	I0410 22:52:16.701527   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.701539   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:16.701547   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:16.701618   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:16.740687   57719 cri.go:89] found id: ""
	I0410 22:52:16.740716   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.740725   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:16.740730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:16.740789   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:16.777349   57719 cri.go:89] found id: ""
	I0410 22:52:16.777372   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.777380   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:16.777385   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:16.777454   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:16.819855   57719 cri.go:89] found id: ""
	I0410 22:52:16.819890   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.819900   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:16.819909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:16.819973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:16.859939   57719 cri.go:89] found id: ""
	I0410 22:52:16.859970   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.859981   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:16.859991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:16.860056   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:16.897861   57719 cri.go:89] found id: ""
	I0410 22:52:16.897886   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.897893   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:16.897899   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:16.897962   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:16.935642   57719 cri.go:89] found id: ""
	I0410 22:52:16.935673   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.935681   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:16.935687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:16.935733   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:16.974268   57719 cri.go:89] found id: ""
	I0410 22:52:16.974294   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.974302   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:16.974311   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:16.974327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:17.027850   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:17.027888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:17.043343   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:17.043379   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:17.120945   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:17.120967   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:17.120979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.204831   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:17.204868   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:19.749712   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:19.764102   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:19.764181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:19.800759   57719 cri.go:89] found id: ""
	I0410 22:52:19.800787   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.800795   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:19.800801   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:19.800851   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:19.839678   57719 cri.go:89] found id: ""
	I0410 22:52:19.839711   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.839723   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:19.839730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:19.839791   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:19.876983   57719 cri.go:89] found id: ""
	I0410 22:52:19.877007   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.877015   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:19.877020   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:19.877081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:19.918139   57719 cri.go:89] found id: ""
	I0410 22:52:19.918167   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.918177   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:19.918186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:19.918243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:19.954770   57719 cri.go:89] found id: ""
	I0410 22:52:19.954808   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.954818   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:19.954825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:19.954881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:19.993643   57719 cri.go:89] found id: ""
	I0410 22:52:19.993670   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.993680   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:19.993687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:19.993746   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:20.030466   57719 cri.go:89] found id: ""
	I0410 22:52:20.030494   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.030503   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:20.030510   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:20.030575   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:20.069264   57719 cri.go:89] found id: ""
	I0410 22:52:20.069291   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.069299   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:20.069307   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:20.069318   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:20.117354   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:20.117382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:20.170758   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:20.170800   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:20.187014   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:20.187055   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:20.269620   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:20.269645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:20.269661   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.518440   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.018602   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.151102   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.151664   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.901836   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:23.401655   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.844841   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:22.861923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:22.861983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:22.907972   57719 cri.go:89] found id: ""
	I0410 22:52:22.908000   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.908010   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:22.908017   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:22.908081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:22.949822   57719 cri.go:89] found id: ""
	I0410 22:52:22.949851   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.949861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:22.949869   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:22.949935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:22.989872   57719 cri.go:89] found id: ""
	I0410 22:52:22.989895   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.989902   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:22.989908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:22.989959   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:23.031881   57719 cri.go:89] found id: ""
	I0410 22:52:23.031900   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.031908   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:23.031913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:23.031978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:23.071691   57719 cri.go:89] found id: ""
	I0410 22:52:23.071719   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.071726   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:23.071732   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:23.071792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:23.109961   57719 cri.go:89] found id: ""
	I0410 22:52:23.109990   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.110001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:23.110009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:23.110069   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:23.152955   57719 cri.go:89] found id: ""
	I0410 22:52:23.152979   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.152986   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:23.152991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:23.153054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:23.191883   57719 cri.go:89] found id: ""
	I0410 22:52:23.191924   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.191935   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:23.191947   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:23.191959   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:23.232692   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:23.232731   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:23.283648   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:23.283684   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:23.297701   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:23.297729   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:23.381657   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:23.381673   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:23.381685   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:22.520899   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.016955   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.018541   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.650053   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.402084   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.402670   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.961531   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:25.977539   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:25.977639   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:26.021844   57719 cri.go:89] found id: ""
	I0410 22:52:26.021875   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.021886   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:26.021893   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:26.021954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:26.064286   57719 cri.go:89] found id: ""
	I0410 22:52:26.064316   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.064327   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:26.064335   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:26.064394   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:26.104381   57719 cri.go:89] found id: ""
	I0410 22:52:26.104426   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.104437   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:26.104445   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:26.104522   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:26.143382   57719 cri.go:89] found id: ""
	I0410 22:52:26.143407   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.143417   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:26.143424   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:26.143489   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:26.179609   57719 cri.go:89] found id: ""
	I0410 22:52:26.179635   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.179646   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:26.179652   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:26.179714   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:26.217660   57719 cri.go:89] found id: ""
	I0410 22:52:26.217689   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.217695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:26.217701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:26.217758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:26.254914   57719 cri.go:89] found id: ""
	I0410 22:52:26.254946   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.254956   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:26.254963   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:26.255047   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:26.293738   57719 cri.go:89] found id: ""
	I0410 22:52:26.293769   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.293779   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:26.293790   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:26.293809   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:26.366700   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:26.366725   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:26.366741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:26.445143   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:26.445183   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:26.493175   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:26.493203   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:26.554952   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:26.554992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.072225   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:29.087075   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:29.087150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:29.131314   57719 cri.go:89] found id: ""
	I0410 22:52:29.131345   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.131357   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:29.131365   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:29.131427   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:29.169263   57719 cri.go:89] found id: ""
	I0410 22:52:29.169289   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.169298   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:29.169304   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:29.169357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:29.209535   57719 cri.go:89] found id: ""
	I0410 22:52:29.209559   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.209570   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:29.209575   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:29.209630   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:29.251172   57719 cri.go:89] found id: ""
	I0410 22:52:29.251225   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.251233   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:29.251238   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:29.251290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:29.296142   57719 cri.go:89] found id: ""
	I0410 22:52:29.296169   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.296179   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:29.296185   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:29.296245   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:29.336910   57719 cri.go:89] found id: ""
	I0410 22:52:29.336933   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.336940   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:29.336946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:29.337003   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:29.396332   57719 cri.go:89] found id: ""
	I0410 22:52:29.396371   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.396382   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:29.396390   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:29.396475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:29.438301   57719 cri.go:89] found id: ""
	I0410 22:52:29.438332   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.438340   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:29.438348   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:29.438360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:29.482687   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:29.482711   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:29.535115   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:29.535146   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.551736   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:29.551760   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:29.624162   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:29.624198   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:29.624213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:29.517873   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.519737   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.650947   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.651296   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.150101   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.901370   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.902050   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.401849   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
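	The interleaved pod_ready.go lines come from parallel test runs polling their metrics-server pods until the Ready condition turns True or a four-minute deadline expires. A hedged client-go sketch of such a wait loop follows; the helper name is made up and this is not minikube's actual code.

```go
// Sketch of a readiness wait loop like the one behind the pod_ready.go
// lines: poll the pod until its Ready condition is True or the deadline
// passes. Hypothetical helper, not minikube's implementation.
package k8swait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func WaitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					if cond.Status == corev1.ConditionTrue {
						return nil
					}
					// Matches the log: status "Ready":"False", keep polling.
					break
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting %s for pod %s/%s to be Ready", timeout, ns, name)
}
```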
	I0410 22:52:32.204355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:32.218239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:32.218310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:32.255412   57719 cri.go:89] found id: ""
	I0410 22:52:32.255440   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.255451   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:32.255458   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:32.255516   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:32.293553   57719 cri.go:89] found id: ""
	I0410 22:52:32.293580   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.293591   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:32.293604   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:32.293663   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:32.332814   57719 cri.go:89] found id: ""
	I0410 22:52:32.332846   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.332855   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:32.332862   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:32.332924   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:32.371312   57719 cri.go:89] found id: ""
	I0410 22:52:32.371347   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.371368   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:32.371376   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:32.371441   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:32.407630   57719 cri.go:89] found id: ""
	I0410 22:52:32.407652   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.407659   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:32.407664   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:32.407720   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:32.444878   57719 cri.go:89] found id: ""
	I0410 22:52:32.444904   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.444914   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:32.444923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:32.444989   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:32.490540   57719 cri.go:89] found id: ""
	I0410 22:52:32.490567   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.490578   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:32.490586   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:32.490644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:32.528911   57719 cri.go:89] found id: ""
	I0410 22:52:32.528953   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.528961   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:32.528969   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:32.528979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:32.608601   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:32.608626   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:32.608641   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:32.684840   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:32.684876   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:32.728092   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:32.728132   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:32.778491   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:32.778524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.296228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:35.310615   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:35.310705   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:35.377585   57719 cri.go:89] found id: ""
	I0410 22:52:35.377612   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.377623   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:35.377632   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:35.377692   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:35.417734   57719 cri.go:89] found id: ""
	I0410 22:52:35.417775   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.417796   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:35.417803   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:35.417864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:34.017119   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.017526   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.150859   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.151112   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.402036   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.402201   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:35.456256   57719 cri.go:89] found id: ""
	I0410 22:52:35.456281   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.456291   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:35.456298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:35.456382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:35.495233   57719 cri.go:89] found id: ""
	I0410 22:52:35.495257   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.495267   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:35.495274   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:35.495333   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:35.535239   57719 cri.go:89] found id: ""
	I0410 22:52:35.535273   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.535284   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:35.535292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:35.535352   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:35.571601   57719 cri.go:89] found id: ""
	I0410 22:52:35.571628   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.571638   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:35.571645   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:35.571708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:35.612008   57719 cri.go:89] found id: ""
	I0410 22:52:35.612036   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.612045   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:35.612051   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:35.612099   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:35.649029   57719 cri.go:89] found id: ""
	I0410 22:52:35.649057   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.649065   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:35.649073   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:35.649084   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:35.702630   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:35.702668   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.718404   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:35.718433   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:35.798380   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:35.798405   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:35.798420   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:35.874049   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:35.874085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:38.416265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:38.430921   57719 kubeadm.go:591] duration metric: took 4m3.090666464s to restartPrimaryControlPlane
	W0410 22:52:38.431006   57719 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:52:38.431030   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:52:41.138973   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.707913754s)
	I0410 22:52:41.139063   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:52:41.155646   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:52:41.166345   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:52:41.176443   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:52:41.176481   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:52:41.176547   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:52:41.186887   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:52:41.186960   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:52:41.199740   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:52:41.209843   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:52:41.209901   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:52:41.219804   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.229739   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:52:41.229807   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.240127   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:52:41.249763   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:52:41.249824   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
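	The grep/rm sequence above is the stale-kubeconfig cleanup performed before `kubeadm init`: each of the four kubeconfigs is checked for the expected control-plane endpoint and removed when the string (or the file itself) is missing. The following Go sketch mirrors that check using the endpoint and paths shown in the log; the function name is illustrative and minikube performs these steps over SSH rather than locally.

```go
// Sketch of the stale-config cleanup recorded above: keep a kubeconfig
// only if it already points at the expected control-plane endpoint,
// otherwise remove it so `kubeadm init` can rewrite it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat as stale and remove.
			os.Remove(p)
			fmt.Printf("removed stale config %s\n", p)
			continue
		}
		fmt.Printf("keeping %s (already points at %s)\n", p, endpoint)
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```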
	I0410 22:52:41.260148   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:52:41.334127   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:52:41.334200   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:52:41.506104   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:52:41.506307   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:52:41.506488   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:52:41.715227   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:52:38.519180   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.018674   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.649983   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.152610   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.717460   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:52:41.717564   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:52:41.717654   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:52:41.717781   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:52:41.717898   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:52:41.718004   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:52:41.718099   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:52:41.718203   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:52:41.718550   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:52:41.719083   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:52:41.719413   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:52:41.719571   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:52:41.719675   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:52:41.998202   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:52:42.109508   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:52:42.315545   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:52:42.448910   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:52:42.465903   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:52:42.467312   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:52:42.467387   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:52:42.636790   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:52:40.402237   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.404435   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.638969   57719 out.go:204]   - Booting up control plane ...
	I0410 22:52:42.639106   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:52:42.652152   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:52:42.653843   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:52:42.654719   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:52:42.658006   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:52:43.518416   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.017894   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:43.650778   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.149976   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:44.902059   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.902549   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:49.401695   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.517833   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.150825   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:50.151391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.901096   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.902619   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.518616   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.519254   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:52.649783   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:54.651766   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:56.655687   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.903916   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.400789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.517303   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:59.152346   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:01.651146   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.901531   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.400690   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:02.517569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:04.517775   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.017655   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.651728   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.652505   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.901605   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.902363   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:09.018576   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:11.510820   58186 pod_ready.go:81] duration metric: took 4m0.000124062s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:11.510861   58186 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:53:11.510885   58186 pod_ready.go:38] duration metric: took 4m10.548289153s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:11.510918   58186 kubeadm.go:591] duration metric: took 4m18.480793797s to restartPrimaryControlPlane
	W0410 22:53:11.510993   58186 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:53:11.511019   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:53:08.151155   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.151358   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.400722   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.401658   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.401745   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.652391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.652682   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:17.149892   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:16.900482   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:18.900789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:19.152154   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:21.649975   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:20.902068   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:23.401500   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:22.660165   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:53:22.660260   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:22.660520   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:23.653457   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:26.149469   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:25.903070   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:28.400947   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:27.660705   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:27.660919   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:28.150895   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.650254   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.401054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.401994   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.654427   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:35.149580   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150506   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150533   58701 pod_ready.go:81] duration metric: took 4m0.00757056s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:37.150544   58701 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0410 22:53:37.150552   58701 pod_ready.go:38] duration metric: took 4m5.55870495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:37.150570   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:53:37.150602   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:37.150659   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:37.213472   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.213499   58701 cri.go:89] found id: ""
	I0410 22:53:37.213511   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:37.213561   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.218928   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:37.218997   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:37.260045   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.260066   58701 cri.go:89] found id: ""
	I0410 22:53:37.260073   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:37.260116   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.265329   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:37.265393   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:37.306649   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:37.306674   58701 cri.go:89] found id: ""
	I0410 22:53:37.306682   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:37.306729   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.311163   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:37.311213   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:37.351855   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:37.351883   58701 cri.go:89] found id: ""
	I0410 22:53:37.351890   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:37.351937   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.356427   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:37.356497   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:34.900998   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:36.901173   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:39.400680   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.661409   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:37.661698   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:37.399224   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:37.399248   58701 cri.go:89] found id: ""
	I0410 22:53:37.399257   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:37.399315   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.404314   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:37.404380   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:37.444169   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.444196   58701 cri.go:89] found id: ""
	I0410 22:53:37.444205   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:37.444264   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.448618   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:37.448693   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:37.487481   58701 cri.go:89] found id: ""
	I0410 22:53:37.487507   58701 logs.go:276] 0 containers: []
	W0410 22:53:37.487514   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:37.487519   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:37.487566   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:37.531000   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:37.531018   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.531022   58701 cri.go:89] found id: ""
	I0410 22:53:37.531029   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:37.531081   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.535679   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.539974   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:37.539998   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:37.601043   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:37.601086   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:37.616427   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:37.616458   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.669951   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:37.669983   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.716243   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:37.716273   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.774644   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:37.774678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.821033   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:37.821077   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:37.883644   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:37.883678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:38.019289   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:38.019320   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:38.057708   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:38.057739   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:38.100119   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:38.100149   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:38.143845   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:38.143875   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:38.186718   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:38.186749   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
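	Once container ids are found, the log-gathering pass above shells out to `sudo crictl logs --tail 400 <id>` for each one. A short sketch of that pattern, again assuming local crictl instead of the SSH runner; the container id below is a placeholder.

```go
// Sketch of the per-container log gathering recorded above:
// `sudo crictl logs --tail 400 <id>` for each container id found earlier.
package main

import (
	"fmt"
	"os/exec"
)

func gatherContainerLogs(ids []string, tail int) map[string]string {
	logs := make(map[string]string)
	for _, id := range ids {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
		if err != nil {
			logs[id] = fmt.Sprintf("failed to gather logs: %v", err)
			continue
		}
		logs[id] = string(out)
	}
	return logs
}

func main() {
	// "<container-id>" stands in for an id returned by crictl ps.
	for id, out := range gatherContainerLogs([]string{"<container-id>"}, 400) {
		fmt.Printf("=== %s ===\n%s\n", id, out)
	}
}
```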
	I0410 22:53:41.168951   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:53:41.186828   58701 api_server.go:72] duration metric: took 4m17.343179611s to wait for apiserver process to appear ...
	I0410 22:53:41.186866   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:53:41.186911   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:41.186972   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:41.228167   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.228194   58701 cri.go:89] found id: ""
	I0410 22:53:41.228201   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:41.228251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.232754   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:41.232812   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:41.271497   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.271519   58701 cri.go:89] found id: ""
	I0410 22:53:41.271527   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:41.271575   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.276165   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:41.276234   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:41.319164   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.319187   58701 cri.go:89] found id: ""
	I0410 22:53:41.319195   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:41.319251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.323627   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:41.323696   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:41.366648   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:41.366671   58701 cri.go:89] found id: ""
	I0410 22:53:41.366678   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:41.366733   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.371132   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:41.371197   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:41.412956   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.412974   58701 cri.go:89] found id: ""
	I0410 22:53:41.412982   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:41.413034   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.417441   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:41.417495   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:41.460008   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.460037   58701 cri.go:89] found id: ""
	I0410 22:53:41.460048   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:41.460105   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.464422   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:41.464492   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:41.504095   58701 cri.go:89] found id: ""
	I0410 22:53:41.504126   58701 logs.go:276] 0 containers: []
	W0410 22:53:41.504134   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:41.504140   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:41.504199   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:41.543443   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.543467   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.543473   58701 cri.go:89] found id: ""
	I0410 22:53:41.543481   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:41.543540   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.548182   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.552917   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:41.552941   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.601620   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:41.601652   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.653090   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:41.653124   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.692683   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:41.692711   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.736312   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:41.736353   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:41.753242   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:41.753283   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.812881   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:41.812910   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.860686   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:41.860714   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.902523   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:41.902546   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:41.945812   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:41.945848   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:42.001012   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:42.001046   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:42.123971   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:42.124000   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:42.168773   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:42.168806   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:41.405604   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.901172   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.595677   58186 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.084634816s)
	I0410 22:53:43.595765   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:43.613470   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:53:43.624876   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:53:43.638564   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:53:43.638592   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:53:43.638641   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:53:43.652554   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:53:43.652608   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:53:43.664263   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:53:43.674443   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:53:43.674497   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:53:43.695444   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.705446   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:53:43.705518   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.716451   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:53:43.726343   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:53:43.726407   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:53:43.736859   58186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:53:43.957994   58186 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:53:45.115742   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:53:45.120239   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:53:45.121662   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:53:45.121690   58701 api_server.go:131] duration metric: took 3.934815447s to wait for apiserver health ...
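The health wait above polls the apiserver's unauthenticated /healthz endpoint. A rough manual equivalent from the node, shown only as an illustration and not part of the captured log, would be:

	# the serving cert is signed by minikube's own CA, hence -k; /healthz typically
	# answers anonymous requests via the default system:public-info-viewer binding
	curl -k https://192.168.72.170:8444/healthz
	# or go through the kubeconfig minikube keeps on the node
	sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz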
	I0410 22:53:45.121699   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:53:45.121727   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:45.121780   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:45.172291   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.172315   58701 cri.go:89] found id: ""
	I0410 22:53:45.172324   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:45.172382   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.177041   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:45.177103   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:45.213853   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:45.213880   58701 cri.go:89] found id: ""
	I0410 22:53:45.213889   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:45.213944   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.218478   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:45.218546   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:45.268753   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.268779   58701 cri.go:89] found id: ""
	I0410 22:53:45.268792   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:45.268843   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.273223   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:45.273291   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:45.314032   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:45.314057   58701 cri.go:89] found id: ""
	I0410 22:53:45.314066   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:45.314115   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.318671   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:45.318740   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:45.356139   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.356167   58701 cri.go:89] found id: ""
	I0410 22:53:45.356177   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:45.356234   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.361449   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:45.361520   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:45.405153   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.405174   58701 cri.go:89] found id: ""
	I0410 22:53:45.405181   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:45.405230   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.409795   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:45.409871   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:45.451984   58701 cri.go:89] found id: ""
	I0410 22:53:45.452016   58701 logs.go:276] 0 containers: []
	W0410 22:53:45.452026   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:45.452034   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:45.452095   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:45.491612   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.491650   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.491656   58701 cri.go:89] found id: ""
	I0410 22:53:45.491665   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:45.491724   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.496253   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.500723   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:45.500751   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.557083   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:45.557118   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.616768   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:45.616804   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.664097   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:45.664133   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.707920   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:45.707957   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:45.751862   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:45.751898   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.806584   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:45.806619   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.846145   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:45.846170   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:45.970766   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:45.970796   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:46.024049   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:46.024081   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:46.067009   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:46.067048   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:46.462765   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:46.462812   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:46.520007   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:46.520049   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:49.047137   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:53:49.047166   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.047170   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.047174   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.047177   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.047180   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.047183   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.047189   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.047192   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.047201   58701 system_pods.go:74] duration metric: took 3.925495812s to wait for pod list to return data ...
	I0410 22:53:49.047208   58701 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:53:49.050341   58701 default_sa.go:45] found service account: "default"
	I0410 22:53:49.050363   58701 default_sa.go:55] duration metric: took 3.148222ms for default service account to be created ...
	I0410 22:53:49.050371   58701 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:53:49.056364   58701 system_pods.go:86] 8 kube-system pods found
	I0410 22:53:49.056390   58701 system_pods.go:89] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.056414   58701 system_pods.go:89] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.056423   58701 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.056431   58701 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.056437   58701 system_pods.go:89] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.056444   58701 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.056455   58701 system_pods.go:89] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.056462   58701 system_pods.go:89] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.056475   58701 system_pods.go:126] duration metric: took 6.097239ms to wait for k8s-apps to be running ...
	I0410 22:53:49.056492   58701 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:53:49.056537   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:49.077239   58701 system_svc.go:56] duration metric: took 20.737127ms WaitForService to wait for kubelet
	I0410 22:53:49.077269   58701 kubeadm.go:576] duration metric: took 4m25.233626302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:53:49.077297   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:53:49.080463   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:53:49.080486   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:53:49.080497   58701 node_conditions.go:105] duration metric: took 3.195662ms to run NodePressure ...
	I0410 22:53:49.080508   58701 start.go:240] waiting for startup goroutines ...
	I0410 22:53:49.080515   58701 start.go:245] waiting for cluster config update ...
	I0410 22:53:49.080525   58701 start.go:254] writing updated cluster config ...
	I0410 22:53:49.080805   58701 ssh_runner.go:195] Run: rm -f paused
	I0410 22:53:49.141489   58701 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:53:49.143597   58701 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-519831" cluster and "default" namespace by default
	I0410 22:53:45.903632   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:48.403981   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.064071   58186 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0410 22:53:53.064154   58186 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:53:53.064260   58186 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:53:53.064429   58186 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:53:53.064574   58186 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:53:53.064670   58186 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:53:53.066595   58186 out.go:204]   - Generating certificates and keys ...
	I0410 22:53:53.066703   58186 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:53:53.066808   58186 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:53:53.066929   58186 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:53:53.067023   58186 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:53:53.067155   58186 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:53:53.067235   58186 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:53:53.067329   58186 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:53:53.067433   58186 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:53:53.067546   58186 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:53:53.067655   58186 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:53:53.067733   58186 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:53:53.067890   58186 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:53:53.067961   58186 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:53:53.068049   58186 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:53:53.068132   58186 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:53:53.068232   58186 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:53:53.068310   58186 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:53:53.068379   58186 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:53:53.068510   58186 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:53:53.070126   58186 out.go:204]   - Booting up control plane ...
	I0410 22:53:53.070219   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:53:53.070324   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:53:53.070425   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:53:53.070565   58186 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:53:53.070686   58186 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:53:53.070748   58186 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:53:53.070973   58186 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:53:53.071083   58186 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002820 seconds
	I0410 22:53:53.071249   58186 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:53:53.071424   58186 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:53:53.071485   58186 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:53:53.071624   58186 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-706500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:53:53.071680   58186 kubeadm.go:309] [bootstrap-token] Using token: 0wvld6.jntz9ft9bn5g46le
	I0410 22:53:53.073567   58186 out.go:204]   - Configuring RBAC rules ...
	I0410 22:53:53.073708   58186 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:53:53.073819   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:53:53.074015   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:53:53.074206   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:53:53.074370   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:53:53.074548   58186 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:53:53.074726   58186 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:53:53.074798   58186 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:53:53.074873   58186 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:53:53.074884   58186 kubeadm.go:309] 
	I0410 22:53:53.074956   58186 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:53:53.074978   58186 kubeadm.go:309] 
	I0410 22:53:53.075077   58186 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:53:53.075088   58186 kubeadm.go:309] 
	I0410 22:53:53.075119   58186 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:53:53.075191   58186 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:53:53.075262   58186 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:53:53.075273   58186 kubeadm.go:309] 
	I0410 22:53:53.075337   58186 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:53:53.075353   58186 kubeadm.go:309] 
	I0410 22:53:53.075419   58186 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:53:53.075437   58186 kubeadm.go:309] 
	I0410 22:53:53.075503   58186 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:53:53.075621   58186 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:53:53.075714   58186 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:53:53.075724   58186 kubeadm.go:309] 
	I0410 22:53:53.075829   58186 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:53:53.075936   58186 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:53:53.075953   58186 kubeadm.go:309] 
	I0410 22:53:53.076058   58186 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076196   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:53:53.076253   58186 kubeadm.go:309] 	--control-plane 
	I0410 22:53:53.076270   58186 kubeadm.go:309] 
	I0410 22:53:53.076387   58186 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:53:53.076422   58186 kubeadm.go:309] 
	I0410 22:53:53.076516   58186 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076661   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
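The join commands above embed a bootstrap token and the CA public-key hash. If either needs to be re-derived later, the usual kubeadm and openssl invocations apply; shown only as an illustration, and note that this run keeps its certificates under /var/lib/minikube/certs rather than the default /etc/kubernetes/pki:

	# list the bootstrap tokens currently known to the cluster
	sudo /var/lib/minikube/binaries/v1.29.3/kubeadm token list
	# recompute the --discovery-token-ca-cert-hash value from the cluster CA
	sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'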
	I0410 22:53:53.076711   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:53:53.076726   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:53:53.078503   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:53:50.902397   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.403449   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.079631   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:53:53.132043   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
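The 496-byte conflist itself is not echoed in the log; it can be read back on the node, and a typical CNI bridge configuration has roughly the shape sketched in the comments below (hypothetical values, not taken from this run):

	sudo cat /etc/cni/net.d/1-k8s.conflist
	# example shape of a bridge conflist (values are assumptions for illustration):
	# {
	#   "cniVersion": "0.3.1",
	#   "name": "bridge",
	#   "plugins": [
	#     { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	#       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	#     { "type": "portmap", "capabilities": { "portMappings": true } }
	#   ]
	# }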
	I0410 22:53:53.167760   58186 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:53:53.167847   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:53.167870   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-706500 minikube.k8s.io/updated_at=2024_04_10T22_53_53_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=embed-certs-706500 minikube.k8s.io/primary=true
	I0410 22:53:53.511359   58186 ops.go:34] apiserver oom_adj: -16
	I0410 22:53:53.511506   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.012080   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.511816   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.011883   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.511809   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.011572   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.512114   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:57.011878   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.900548   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.901541   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.662444   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:57.662687   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
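When kubeadm reports the kubelet as not running or unhealthy, as it does here for the 57719 run, the usual next step on that node is to hit the same health endpoint kubeadm names and to inspect the service unit; these are manual debugging commands, not part of the captured run:

	curl -sSL http://localhost:10248/healthz        # the probe kubeadm itself is retrying
	sudo systemctl status kubelet --no-pager        # is the unit active, and if not, why
	sudo journalctl -u kubelet --no-pager -n 100    # recent kubelet log lines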
	I0410 22:53:57.511726   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.011563   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.512617   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.012145   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.512448   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.012278   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.512290   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.012507   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.512415   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:02.011660   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.401622   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.902558   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.511581   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.012326   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.512539   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.012085   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.512496   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.011911   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.512180   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.619801   58186 kubeadm.go:1107] duration metric: took 12.452015223s to wait for elevateKubeSystemPrivileges
	W0410 22:54:05.619839   58186 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:54:05.619847   58186 kubeadm.go:393] duration metric: took 5m12.640298551s to StartCluster
	I0410 22:54:05.619862   58186 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.619936   58186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:54:05.621989   58186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.622331   58186 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:54:05.624233   58186 out.go:177] * Verifying Kubernetes components...
	I0410 22:54:05.622444   58186 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:54:05.622516   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:54:05.625850   58186 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-706500"
	I0410 22:54:05.625872   58186 addons.go:69] Setting default-storageclass=true in profile "embed-certs-706500"
	I0410 22:54:05.625882   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:54:05.625893   58186 addons.go:69] Setting metrics-server=true in profile "embed-certs-706500"
	I0410 22:54:05.625924   58186 addons.go:234] Setting addon metrics-server=true in "embed-certs-706500"
	W0410 22:54:05.625930   58186 addons.go:243] addon metrics-server should already be in state true
	I0410 22:54:05.625954   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.625888   58186 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-706500"
	I0410 22:54:05.625903   58186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-706500"
	W0410 22:54:05.625982   58186 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:54:05.626012   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.626365   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626407   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626421   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626440   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626441   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626442   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.643647   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0410 22:54:05.643758   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0410 22:54:05.644070   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0410 22:54:05.644101   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644253   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644856   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644883   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644915   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.645239   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645419   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.645475   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.645489   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645501   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.646021   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.646035   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646062   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.646588   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646619   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.648242   58186 addons.go:234] Setting addon default-storageclass=true in "embed-certs-706500"
	W0410 22:54:05.648261   58186 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:54:05.648282   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.648555   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.648582   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.661773   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0410 22:54:05.662556   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.663049   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.663073   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.663474   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.663708   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.664716   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I0410 22:54:05.665027   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.665617   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.665634   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.665706   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0410 22:54:05.666342   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.666343   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.665946   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.668790   58186 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:54:05.667015   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.667244   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.670336   58186 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:05.670357   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:54:05.670374   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.668826   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.668843   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.671350   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.671633   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.673653   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.675310   58186 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:54:05.674011   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.674533   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.676671   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:54:05.676677   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.676690   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:54:05.676710   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.676713   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.676821   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.676976   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.677117   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.680146   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.680927   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.680964   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.681136   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.681515   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.681681   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.681834   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.688424   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0410 22:54:05.688861   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.689299   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.689320   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.689589   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.689741   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.691090   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.691335   58186 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:05.691353   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:54:05.691369   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.694552   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695080   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.695118   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695426   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.695771   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.695939   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.696084   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.860032   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:54:05.881036   58186 node_ready.go:35] waiting up to 6m0s for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891218   58186 node_ready.go:49] node "embed-certs-706500" has status "Ready":"True"
	I0410 22:54:05.891237   58186 node_ready.go:38] duration metric: took 10.166143ms for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891247   58186 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:05.899013   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:06.064031   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:54:06.064051   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:54:06.065727   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:06.075127   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:06.140574   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:54:06.140607   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:54:06.216389   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:06.216428   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:54:06.356117   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:07.409983   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.334826611s)
	I0410 22:54:07.410039   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410052   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410103   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.344342448s)
	I0410 22:54:07.410184   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410199   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410321   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410362   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410371   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410382   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410452   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410505   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410519   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410531   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410678   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410765   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410802   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410820   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410822   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.438723   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.438742   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.439085   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.439104   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.439085   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.738187   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.382031326s)
	I0410 22:54:07.738252   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738267   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738556   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738586   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738597   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738604   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738865   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738885   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738908   58186 addons.go:470] Verifying addon metrics-server=true in "embed-certs-706500"
	I0410 22:54:07.741639   58186 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
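With the addons reported as enabled, metrics-server health can be verified through the Deployment and the aggregated APIService the addon creates; verification commands are shown for illustration, with object names taken from the standard metrics-server addon rather than from this log:

	sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status deployment/metrics-server
	sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get apiservice v1beta1.metrics.k8s.io
	# `kubectl top nodes` starts returning data once that APIService reports Available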
	I0410 22:54:05.403374   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:07.903041   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.895154   57270 pod_ready.go:81] duration metric: took 4m0.000708165s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	E0410 22:54:08.895186   57270 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:54:08.895214   57270 pod_ready.go:38] duration metric: took 4m14.550044852s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:08.895246   57270 kubeadm.go:591] duration metric: took 4m22.444968141s to restartPrimaryControlPlane
	W0410 22:54:08.895308   57270 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:54:08.895339   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:54:07.742954   58186 addons.go:505] duration metric: took 2.120520274s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0410 22:54:07.910203   58186 pod_ready.go:102] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.906369   58186 pod_ready.go:92] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.906394   58186 pod_ready.go:81] duration metric: took 3.007348288s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.906407   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913564   58186 pod_ready.go:92] pod "coredns-76f75df574-v2pp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.913582   58186 pod_ready.go:81] duration metric: took 7.168463ms for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913592   58186 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919270   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.919296   58186 pod_ready.go:81] duration metric: took 5.696297ms for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919308   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924389   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.924430   58186 pod_ready.go:81] duration metric: took 5.111624ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924443   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929296   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.929320   58186 pod_ready.go:81] duration metric: took 4.869073ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929333   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305730   58186 pod_ready.go:92] pod "kube-proxy-xj5nq" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.305756   58186 pod_ready.go:81] duration metric: took 376.415901ms for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305770   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703841   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.703869   58186 pod_ready.go:81] duration metric: took 398.090582ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703881   58186 pod_ready.go:38] duration metric: took 3.812625835s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:09.703898   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:54:09.703957   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:54:09.720728   58186 api_server.go:72] duration metric: took 4.098354983s to wait for apiserver process to appear ...
	I0410 22:54:09.720763   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:54:09.720786   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:54:09.726522   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:54:09.727951   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:54:09.727979   58186 api_server.go:131] duration metric: took 7.20731ms to wait for apiserver health ...
	I0410 22:54:09.727989   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:54:09.908166   58186 system_pods.go:59] 9 kube-system pods found
	I0410 22:54:09.908203   58186 system_pods.go:61] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:09.908212   58186 system_pods.go:61] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:09.908217   58186 system_pods.go:61] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:09.908222   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:09.908227   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:09.908232   58186 system_pods.go:61] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:09.908236   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:09.908246   58186 system_pods.go:61] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:09.908251   58186 system_pods.go:61] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:09.908263   58186 system_pods.go:74] duration metric: took 180.267138ms to wait for pod list to return data ...
	I0410 22:54:09.908276   58186 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:54:10.103556   58186 default_sa.go:45] found service account: "default"
	I0410 22:54:10.103586   58186 default_sa.go:55] duration metric: took 195.301798ms for default service account to be created ...
	I0410 22:54:10.103597   58186 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:54:10.309537   58186 system_pods.go:86] 9 kube-system pods found
	I0410 22:54:10.309566   58186 system_pods.go:89] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:10.309572   58186 system_pods.go:89] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:10.309578   58186 system_pods.go:89] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:10.309583   58186 system_pods.go:89] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:10.309588   58186 system_pods.go:89] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:10.309592   58186 system_pods.go:89] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:10.309596   58186 system_pods.go:89] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:10.309602   58186 system_pods.go:89] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:10.309607   58186 system_pods.go:89] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:10.309617   58186 system_pods.go:126] duration metric: took 206.014442ms to wait for k8s-apps to be running ...
	I0410 22:54:10.309624   58186 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:54:10.309666   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:10.324614   58186 system_svc.go:56] duration metric: took 14.97975ms WaitForService to wait for kubelet
	I0410 22:54:10.324651   58186 kubeadm.go:576] duration metric: took 4.702277594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:54:10.324669   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:54:10.503911   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:54:10.503939   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:54:10.503949   58186 node_conditions.go:105] duration metric: took 179.27538ms to run NodePressure ...
	I0410 22:54:10.503959   58186 start.go:240] waiting for startup goroutines ...
	I0410 22:54:10.503966   58186 start.go:245] waiting for cluster config update ...
	I0410 22:54:10.503975   58186 start.go:254] writing updated cluster config ...
	I0410 22:54:10.504242   58186 ssh_runner.go:195] Run: rm -f paused
	I0410 22:54:10.555500   58186 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:54:10.557941   58186 out.go:177] * Done! kubectl is now configured to use "embed-certs-706500" cluster and "default" namespace by default
	I0410 22:54:37.664290   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:54:37.664604   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:54:37.664634   57719 kubeadm.go:309] 
	I0410 22:54:37.664776   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:54:37.664843   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:54:37.664854   57719 kubeadm.go:309] 
	I0410 22:54:37.664901   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:54:37.664968   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:54:37.665086   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:54:37.665101   57719 kubeadm.go:309] 
	I0410 22:54:37.665245   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:54:37.665313   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:54:37.665360   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:54:37.665372   57719 kubeadm.go:309] 
	I0410 22:54:37.665579   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:54:37.665695   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:54:37.665707   57719 kubeadm.go:309] 
	I0410 22:54:37.665868   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:54:37.666063   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:54:37.666192   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:54:37.666272   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:54:37.666284   57719 kubeadm.go:309] 
	I0410 22:54:37.667202   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:37.667329   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:54:37.667420   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0410 22:54:37.667555   57719 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0410 22:54:37.667623   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:54:40.975782   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.080419546s)
	I0410 22:54:40.975854   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:40.993677   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:54:41.006185   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:41.016820   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:41.016850   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:41.016985   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:41.026802   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:41.026871   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:41.036992   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:41.046896   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:41.046962   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:41.057184   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.067261   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:41.067321   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.077846   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:41.087745   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:41.087795   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:54:41.098660   57270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:41.159736   57270 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.1
	I0410 22:54:41.159807   57270 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:54:41.316137   57270 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:54:41.316279   57270 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:54:41.316446   57270 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:54:41.559720   57270 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:54:41.561946   57270 out.go:204]   - Generating certificates and keys ...
	I0410 22:54:41.562039   57270 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:54:41.562141   57270 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:54:41.562211   57270 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:54:41.562275   57270 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:54:41.562352   57270 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:54:41.562460   57270 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:54:41.562572   57270 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:54:41.562667   57270 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:54:41.562803   57270 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:54:41.562917   57270 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:54:41.562992   57270 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:54:41.563081   57270 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:54:41.723729   57270 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:54:41.834274   57270 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:54:41.936758   57270 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:54:42.038298   57270 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:54:42.229459   57270 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:54:42.230047   57270 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:54:42.233021   57270 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:54:42.236068   57270 out.go:204]   - Booting up control plane ...
	I0410 22:54:42.236197   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:54:42.236303   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:54:42.236421   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:54:42.255487   57270 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:54:42.256345   57270 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:54:42.256450   57270 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:54:42.391623   57270 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0410 22:54:42.391736   57270 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0410 22:54:43.393825   57270 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00265832s
	I0410 22:54:43.393973   57270 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0410 22:54:43.156141   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.488487447s)
	I0410 22:54:43.156227   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:43.170709   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:43.180624   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:43.180647   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:43.180701   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:43.190482   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:43.190533   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:43.200261   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:43.210061   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:43.210116   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:43.220430   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.230810   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:43.230877   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.241141   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:43.251043   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:43.251111   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:54:43.261163   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:43.534002   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:48.398196   57270 kubeadm.go:309] [api-check] The API server is healthy after 5.002218646s
	I0410 22:54:48.410618   57270 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:54:48.430553   57270 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:54:48.465343   57270 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:54:48.465614   57270 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-646133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:54:48.489066   57270 kubeadm.go:309] [bootstrap-token] Using token: 14xwwp.uyth37qsjfn0mpcx
	I0410 22:54:48.490984   57270 out.go:204]   - Configuring RBAC rules ...
	I0410 22:54:48.491116   57270 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:54:48.502789   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:54:48.516871   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:54:48.523600   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:54:48.527939   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:54:48.537216   57270 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:54:48.806350   57270 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:54:49.234618   57270 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:54:49.803640   57270 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:54:49.804948   57270 kubeadm.go:309] 
	I0410 22:54:49.805074   57270 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:54:49.805095   57270 kubeadm.go:309] 
	I0410 22:54:49.805194   57270 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:54:49.805209   57270 kubeadm.go:309] 
	I0410 22:54:49.805240   57270 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:54:49.805323   57270 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:54:49.805403   57270 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:54:49.805415   57270 kubeadm.go:309] 
	I0410 22:54:49.805482   57270 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:54:49.805489   57270 kubeadm.go:309] 
	I0410 22:54:49.805562   57270 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:54:49.805580   57270 kubeadm.go:309] 
	I0410 22:54:49.805646   57270 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:54:49.805781   57270 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:54:49.805888   57270 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:54:49.805901   57270 kubeadm.go:309] 
	I0410 22:54:49.806038   57270 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:54:49.806143   57270 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:54:49.806154   57270 kubeadm.go:309] 
	I0410 22:54:49.806262   57270 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806398   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:54:49.806438   57270 kubeadm.go:309] 	--control-plane 
	I0410 22:54:49.806456   57270 kubeadm.go:309] 
	I0410 22:54:49.806565   57270 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:54:49.806581   57270 kubeadm.go:309] 
	I0410 22:54:49.806661   57270 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806777   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:54:49.808385   57270 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:49.808455   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:54:49.808473   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:54:49.811276   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:54:49.812840   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:54:49.829865   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:54:49.854383   57270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:54:49.854454   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:49.854456   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-646133 minikube.k8s.io/updated_at=2024_04_10T22_54_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=no-preload-646133 minikube.k8s.io/primary=true
	I0410 22:54:49.888254   57270 ops.go:34] apiserver oom_adj: -16
	I0410 22:54:50.073922   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:50.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.074134   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.574654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.074970   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.074799   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.574902   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.074695   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.574038   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.074975   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.574297   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.074490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.574490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.074280   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.574569   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.074654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.074630   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.574546   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.075044   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.074961   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.574004   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.074121   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.574476   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.705604   57270 kubeadm.go:1107] duration metric: took 12.851213125s to wait for elevateKubeSystemPrivileges
	W0410 22:55:02.705636   57270 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:55:02.705644   57270 kubeadm.go:393] duration metric: took 5m16.306442396s to StartCluster
	I0410 22:55:02.705660   57270 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.705739   57270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:55:02.707592   57270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.707844   57270 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:55:02.709479   57270 out.go:177] * Verifying Kubernetes components...
	I0410 22:55:02.707944   57270 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:55:02.708074   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:55:02.710816   57270 addons.go:69] Setting storage-provisioner=true in profile "no-preload-646133"
	I0410 22:55:02.710827   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:55:02.710854   57270 addons.go:234] Setting addon storage-provisioner=true in "no-preload-646133"
	W0410 22:55:02.710865   57270 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:55:02.710889   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.710819   57270 addons.go:69] Setting default-storageclass=true in profile "no-preload-646133"
	I0410 22:55:02.710975   57270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-646133"
	I0410 22:55:02.710821   57270 addons.go:69] Setting metrics-server=true in profile "no-preload-646133"
	I0410 22:55:02.711079   57270 addons.go:234] Setting addon metrics-server=true in "no-preload-646133"
	W0410 22:55:02.711090   57270 addons.go:243] addon metrics-server should already be in state true
	I0410 22:55:02.711119   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.711325   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711349   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711352   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711382   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711486   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711507   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.729696   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0410 22:55:02.730179   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.730725   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.730751   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.731138   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I0410 22:55:02.731161   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0410 22:55:02.731223   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.731532   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731551   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731920   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.731951   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.732083   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732103   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732266   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732290   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732642   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732692   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732892   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.733291   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.733336   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.737245   57270 addons.go:234] Setting addon default-storageclass=true in "no-preload-646133"
	W0410 22:55:02.737274   57270 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:55:02.737304   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.737674   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.737710   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.749656   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0410 22:55:02.750133   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.751030   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.751054   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.751467   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.751642   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.752548   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0410 22:55:02.753119   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.753727   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.753903   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.753918   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.755963   57270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:55:02.754443   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.757499   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0410 22:55:02.757548   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:55:02.757559   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:55:02.757576   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.757684   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.758428   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.758880   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.758893   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.759783   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.760197   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.760224   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.760379   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.762291   57270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:55:02.761210   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.761741   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.763819   57270 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:02.763907   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:55:02.763918   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.763841   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.763963   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.764040   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.764153   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.764239   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.767729   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767758   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.767776   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767730   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.767951   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.768100   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.768223   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.782788   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0410 22:55:02.783161   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.783701   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.783726   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.784081   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.784347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.785932   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.786186   57270 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:02.786200   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:55:02.786217   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.789193   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789526   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.789576   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789837   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.790096   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.790278   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.790431   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.922239   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:55:02.957665   57270 node_ready.go:35] waiting up to 6m0s for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981427   57270 node_ready.go:49] node "no-preload-646133" has status "Ready":"True"
	I0410 22:55:02.981449   57270 node_ready.go:38] duration metric: took 23.75134ms for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981458   57270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:02.986557   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:03.024992   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:03.032744   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:03.156968   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:55:03.156989   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:55:03.237497   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:55:03.237522   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:55:03.274982   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.275005   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:55:03.317464   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.512107   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512130   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512173   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512198   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512435   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512455   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512525   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512530   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512541   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512542   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512538   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512551   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512558   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512497   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512782   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512799   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512876   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512915   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512878   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.525688   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.525707   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.526017   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.526042   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.526057   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.905597   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.905627   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906016   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906081   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906089   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906101   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.906107   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906353   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906355   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906381   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906392   57270 addons.go:470] Verifying addon metrics-server=true in "no-preload-646133"
	I0410 22:55:03.908467   57270 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0410 22:55:03.910238   57270 addons.go:505] duration metric: took 1.20230017s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0410 22:55:05.035855   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"False"
	I0410 22:55:05.493330   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.493354   57270 pod_ready.go:81] duration metric: took 2.506773848s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.493365   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498568   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.498593   57270 pod_ready.go:81] duration metric: took 5.220548ms for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498604   57270 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505133   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.505156   57270 pod_ready.go:81] duration metric: took 6.544104ms for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505165   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510391   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.510415   57270 pod_ready.go:81] duration metric: took 5.2417ms for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510427   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524717   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.524737   57270 pod_ready.go:81] duration metric: took 14.302445ms for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524747   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891005   57270 pod_ready.go:92] pod "kube-proxy-24vhc" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.891029   57270 pod_ready.go:81] duration metric: took 366.275947ms for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891039   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291050   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:06.291075   57270 pod_ready.go:81] duration metric: took 400.028808ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291084   57270 pod_ready.go:38] duration metric: took 3.309617471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:06.291101   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:55:06.291165   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:55:06.308433   57270 api_server.go:72] duration metric: took 3.600549626s to wait for apiserver process to appear ...
	I0410 22:55:06.308461   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:55:06.308479   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:55:06.312630   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:55:06.313434   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:55:06.313457   57270 api_server.go:131] duration metric: took 4.989017ms to wait for apiserver health ...
	I0410 22:55:06.313466   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:55:06.494780   57270 system_pods.go:59] 9 kube-system pods found
	I0410 22:55:06.494813   57270 system_pods.go:61] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.494820   57270 system_pods.go:61] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.494826   57270 system_pods.go:61] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.494833   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.494838   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.494843   57270 system_pods.go:61] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.494848   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.494856   57270 system_pods.go:61] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.494862   57270 system_pods.go:61] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.494871   57270 system_pods.go:74] duration metric: took 181.399385ms to wait for pod list to return data ...
	I0410 22:55:06.494890   57270 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:55:06.690158   57270 default_sa.go:45] found service account: "default"
	I0410 22:55:06.690185   57270 default_sa.go:55] duration metric: took 195.289153ms for default service account to be created ...
	I0410 22:55:06.690194   57270 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:55:06.893604   57270 system_pods.go:86] 9 kube-system pods found
	I0410 22:55:06.893632   57270 system_pods.go:89] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.893638   57270 system_pods.go:89] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.893642   57270 system_pods.go:89] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.893646   57270 system_pods.go:89] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.893651   57270 system_pods.go:89] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.893656   57270 system_pods.go:89] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.893659   57270 system_pods.go:89] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.893665   57270 system_pods.go:89] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.893670   57270 system_pods.go:89] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.893679   57270 system_pods.go:126] duration metric: took 203.480657ms to wait for k8s-apps to be running ...
	I0410 22:55:06.893686   57270 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:55:06.893730   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:55:06.909072   57270 system_svc.go:56] duration metric: took 15.374403ms WaitForService to wait for kubelet
	I0410 22:55:06.909096   57270 kubeadm.go:576] duration metric: took 4.20122533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:55:06.909115   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:55:07.090651   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:55:07.090673   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:55:07.090682   57270 node_conditions.go:105] duration metric: took 181.563241ms to run NodePressure ...
	I0410 22:55:07.090692   57270 start.go:240] waiting for startup goroutines ...
	I0410 22:55:07.090698   57270 start.go:245] waiting for cluster config update ...
	I0410 22:55:07.090707   57270 start.go:254] writing updated cluster config ...
	I0410 22:55:07.090957   57270 ssh_runner.go:195] Run: rm -f paused
	I0410 22:55:07.140644   57270 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.1 (minor skew: 1)
	I0410 22:55:07.142770   57270 out.go:177] * Done! kubectl is now configured to use "no-preload-646133" cluster and "default" namespace by default
	I0410 22:56:40.435994   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:56:40.436123   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0410 22:56:40.437810   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:56:40.437872   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:56:40.437967   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:56:40.438082   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:56:40.438235   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:56:40.438321   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:56:40.440009   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:56:40.440110   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:56:40.440210   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:56:40.440336   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:56:40.440417   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:56:40.440501   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:56:40.440563   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:56:40.440622   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:56:40.440685   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:56:40.440752   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:56:40.440858   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:56:40.440923   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:56:40.441004   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:56:40.441076   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:56:40.441131   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:56:40.441185   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:56:40.441242   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:56:40.441375   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:56:40.441501   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:56:40.441565   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:56:40.441658   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:56:40.443122   57719 out.go:204]   - Booting up control plane ...
	I0410 22:56:40.443230   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:56:40.443332   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:56:40.443431   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:56:40.443549   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:56:40.443710   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:56:40.443783   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:56:40.443883   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444111   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444200   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444429   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444520   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444761   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444869   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445124   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445235   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445416   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445423   57719 kubeadm.go:309] 
	I0410 22:56:40.445465   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:56:40.445512   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:56:40.445520   57719 kubeadm.go:309] 
	I0410 22:56:40.445548   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:56:40.445595   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:56:40.445712   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:56:40.445722   57719 kubeadm.go:309] 
	I0410 22:56:40.445880   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:56:40.445931   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:56:40.445967   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:56:40.445972   57719 kubeadm.go:309] 
	I0410 22:56:40.446095   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:56:40.446190   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:56:40.446201   57719 kubeadm.go:309] 
	I0410 22:56:40.446326   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:56:40.446452   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:56:40.446548   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:56:40.446611   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:56:40.446659   57719 kubeadm.go:309] 
	I0410 22:56:40.446681   57719 kubeadm.go:393] duration metric: took 8m5.163157284s to StartCluster
	I0410 22:56:40.446805   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:56:40.446880   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:56:40.499163   57719 cri.go:89] found id: ""
	I0410 22:56:40.499196   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.499205   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:56:40.499212   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:56:40.499292   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:56:40.545429   57719 cri.go:89] found id: ""
	I0410 22:56:40.545465   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.545473   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:56:40.545479   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:56:40.545538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:56:40.583842   57719 cri.go:89] found id: ""
	I0410 22:56:40.583870   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.583880   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:56:40.583887   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:56:40.583957   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:56:40.621054   57719 cri.go:89] found id: ""
	I0410 22:56:40.621075   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.621083   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:56:40.621091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:56:40.621149   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:56:40.665133   57719 cri.go:89] found id: ""
	I0410 22:56:40.665161   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.665168   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:56:40.665175   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:56:40.665231   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:56:40.707490   57719 cri.go:89] found id: ""
	I0410 22:56:40.707519   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.707529   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:56:40.707536   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:56:40.707598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:56:40.748539   57719 cri.go:89] found id: ""
	I0410 22:56:40.748565   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.748576   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:56:40.748584   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:56:40.748644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:56:40.792326   57719 cri.go:89] found id: ""
	I0410 22:56:40.792349   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.792358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:56:40.792366   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:56:40.792376   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:56:40.844309   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:56:40.844346   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:56:40.859678   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:56:40.859715   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:56:40.950099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:56:40.950123   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:56:40.950141   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:56:41.073547   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:56:41.073589   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0410 22:56:41.124970   57719 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0410 22:56:41.125024   57719 out.go:239] * 
	W0410 22:56:41.125096   57719 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.125129   57719 out.go:239] * 
	W0410 22:56:41.126153   57719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:56:41.129869   57719 out.go:177] 
	W0410 22:56:41.131207   57719 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.131286   57719 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0410 22:56:41.131326   57719 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0410 22:56:41.133049   57719 out.go:177] 
	
	
	==> CRI-O <==
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.334658189Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b6fa14c8-9a1e-4277-a7fa-60d7d96122e6 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.334869858Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1712789683699826241,StartedAt:1712789683828176402,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.0-rc.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b23d99268ec85dfc255b89a65a2b7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8b23d99268ec85dfc255b89a65a2b7a6/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8b23d99268ec85dfc255b89a65a2b7a6/containers/kube-scheduler/907fde90,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-no-preload-646133_8b23d99268ec85dfc255b89a65a2b7a6/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{
CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b6fa14c8-9a1e-4277-a7fa-60d7d96122e6 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.358658749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d71f0c4-7479-4aed-a6d9-047c92747bc4 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.358737987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d71f0c4-7479-4aed-a6d9-047c92747bc4 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.360891455Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1a90fec-d181-4acd-8bce-6c23ba0ed29b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.361241091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790249361218810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1a90fec-d181-4acd-8bce-6c23ba0ed29b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.361919555Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e51bbd4-78c0-49fd-b424-76bc253d6817 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.361976378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e51bbd4-78c0-49fd-b424-76bc253d6817 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.362179118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec8d0d02104473d1b74d3f7cdd550cd9c1329263c9ae211f5d79d32a15895ae0,PodSandboxId:98aa9bcbe6e4737a4357fc234ee3619f5386c9435af0024c874ff0a61830d06d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704759074785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v599p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30c2827-5930-41d4-82b7-edfb839b3a74,},Annotations:map[string]string{io.kubernetes.container.hash: fdf46a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e28085ff85adfb86327f333e1bfd9473635076de9a2742d0d7db843b0332df,PodSandboxId:41c42efa2a202eb5275bd92b43d655e2d97ce89294a073eb67f34163a410bf1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704770245179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm2zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d8b995c-717e-43a5-a963-f07a4f7a76a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20d22dca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a76d2c57e073bd9bc6ada95b65d50ec62897e37c5bceb09a83810b1013edc46,PodSandboxId:312e184bed65496636b4cf4bd275dde4ae1e62b7853d9b3ac120d7979d80980c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,State:CONTAINER_RUNNIN
G,CreatedAt:1712789704681414763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24vhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca175e85-76f2-47d2-91a5-0248194a88e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b62c1d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6174309d2279bfc9949db4340f399294d0a6a8247adb8e4de618f5facb06854,PodSandboxId:5c61163b7108bf85d4537d8c77f569e0131b69953317227f9772345f55bbc2c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171278970433
8297649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3232daa9-da88-4152-97c8-e86b3d50b0b8,},Annotations:map[string]string{io.kubernetes.container.hash: cbcd7332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a95b9d5058af0971fbe9adf827d0108e8ff6b55f972a8b472a87281cd5c8b3,PodSandboxId:23d5696225b73cd34b393dcbb17c06fffad8e530ba7eb26fe7c01152a53e47d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789683747194956,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9048145b75d9f795053d905e2e8df6b,},Annotations:map[string]string{io.kubernetes.container.hash: 82d5fd8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63f2ae0fa5319f246ad59d82927c2ad707f20092e6b32af71a1ef8a06307d39,PodSandboxId:3c389c5244238f1502f663a622edcfcdb39c842cc4f2ae8928f4e315e184c244,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712789683719979983,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadf0431cad5782a96391f4d14bd31409f9f925c9e8eedcd6ab3b49a064480,PodSandboxId:25b1516115f454d2e578c2f96caaaf77dfdf11228328346a5e7cd260067cd299,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712789683641924803,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77206909f47e74b9e84d7a2b5eedaafc,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95,PodSandboxId:552181e571efbb16150f1f7d7ef33924726c87eebc816065701e2533cfc0e011,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712789683649144649,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b23d99268ec85dfc255b89a65a2b7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d763c490e6c5df0e625305c075d661241fa8d19dcca80f810ba34f1696f93e,PodSandboxId:3d26f66a41926c5e65c921e6568a934c5685981497e4ea29c9426bc6a5c737ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_EXITED,CreatedAt:1712789389129474602,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e51bbd4-78c0-49fd-b424-76bc253d6817 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.380726215Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=e7d13502-e2ba-47c5-b6d6-427599a6e3fa name=/runtime.v1.RuntimeService/Status
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.380800014Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e7d13502-e2ba-47c5-b6d6-427599a6e3fa name=/runtime.v1.RuntimeService/Status
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.405672759Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f2005a3-e8fe-4bbe-9589-05ab2e9e6742 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.405779788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f2005a3-e8fe-4bbe-9589-05ab2e9e6742 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.407322028Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63a61562-bf02-48a5-a635-0894bee458cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.407760666Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790249407735450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63a61562-bf02-48a5-a635-0894bee458cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.408308879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0041bf43-5dcd-41de-95a3-a162531b14d1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.408361410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0041bf43-5dcd-41de-95a3-a162531b14d1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.408600177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec8d0d02104473d1b74d3f7cdd550cd9c1329263c9ae211f5d79d32a15895ae0,PodSandboxId:98aa9bcbe6e4737a4357fc234ee3619f5386c9435af0024c874ff0a61830d06d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704759074785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v599p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30c2827-5930-41d4-82b7-edfb839b3a74,},Annotations:map[string]string{io.kubernetes.container.hash: fdf46a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e28085ff85adfb86327f333e1bfd9473635076de9a2742d0d7db843b0332df,PodSandboxId:41c42efa2a202eb5275bd92b43d655e2d97ce89294a073eb67f34163a410bf1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704770245179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm2zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d8b995c-717e-43a5-a963-f07a4f7a76a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20d22dca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a76d2c57e073bd9bc6ada95b65d50ec62897e37c5bceb09a83810b1013edc46,PodSandboxId:312e184bed65496636b4cf4bd275dde4ae1e62b7853d9b3ac120d7979d80980c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,State:CONTAINER_RUNNIN
G,CreatedAt:1712789704681414763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24vhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca175e85-76f2-47d2-91a5-0248194a88e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b62c1d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6174309d2279bfc9949db4340f399294d0a6a8247adb8e4de618f5facb06854,PodSandboxId:5c61163b7108bf85d4537d8c77f569e0131b69953317227f9772345f55bbc2c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171278970433
8297649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3232daa9-da88-4152-97c8-e86b3d50b0b8,},Annotations:map[string]string{io.kubernetes.container.hash: cbcd7332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a95b9d5058af0971fbe9adf827d0108e8ff6b55f972a8b472a87281cd5c8b3,PodSandboxId:23d5696225b73cd34b393dcbb17c06fffad8e530ba7eb26fe7c01152a53e47d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789683747194956,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9048145b75d9f795053d905e2e8df6b,},Annotations:map[string]string{io.kubernetes.container.hash: 82d5fd8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63f2ae0fa5319f246ad59d82927c2ad707f20092e6b32af71a1ef8a06307d39,PodSandboxId:3c389c5244238f1502f663a622edcfcdb39c842cc4f2ae8928f4e315e184c244,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712789683719979983,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadf0431cad5782a96391f4d14bd31409f9f925c9e8eedcd6ab3b49a064480,PodSandboxId:25b1516115f454d2e578c2f96caaaf77dfdf11228328346a5e7cd260067cd299,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712789683641924803,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77206909f47e74b9e84d7a2b5eedaafc,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95,PodSandboxId:552181e571efbb16150f1f7d7ef33924726c87eebc816065701e2533cfc0e011,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712789683649144649,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b23d99268ec85dfc255b89a65a2b7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d763c490e6c5df0e625305c075d661241fa8d19dcca80f810ba34f1696f93e,PodSandboxId:3d26f66a41926c5e65c921e6568a934c5685981497e4ea29c9426bc6a5c737ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_EXITED,CreatedAt:1712789389129474602,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0041bf43-5dcd-41de-95a3-a162531b14d1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.453570423Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e33fb60c-734a-4fcc-9ad9-cf787bb9f194 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.453673292Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e33fb60c-734a-4fcc-9ad9-cf787bb9f194 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.455341264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4474e01-b795-4047-a00a-53579afa9053 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.455895983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790249455862103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4474e01-b795-4047-a00a-53579afa9053 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.457073766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b8c0509-3453-4df2-85b1-6f66f533dd6f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.457148373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b8c0509-3453-4df2-85b1-6f66f533dd6f name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:04:09 no-preload-646133 crio[725]: time="2024-04-10 23:04:09.457665876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec8d0d02104473d1b74d3f7cdd550cd9c1329263c9ae211f5d79d32a15895ae0,PodSandboxId:98aa9bcbe6e4737a4357fc234ee3619f5386c9435af0024c874ff0a61830d06d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704759074785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v599p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30c2827-5930-41d4-82b7-edfb839b3a74,},Annotations:map[string]string{io.kubernetes.container.hash: fdf46a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e28085ff85adfb86327f333e1bfd9473635076de9a2742d0d7db843b0332df,PodSandboxId:41c42efa2a202eb5275bd92b43d655e2d97ce89294a073eb67f34163a410bf1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704770245179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm2zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d8b995c-717e-43a5-a963-f07a4f7a76a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20d22dca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a76d2c57e073bd9bc6ada95b65d50ec62897e37c5bceb09a83810b1013edc46,PodSandboxId:312e184bed65496636b4cf4bd275dde4ae1e62b7853d9b3ac120d7979d80980c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,State:CONTAINER_RUNNIN
G,CreatedAt:1712789704681414763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24vhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca175e85-76f2-47d2-91a5-0248194a88e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b62c1d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6174309d2279bfc9949db4340f399294d0a6a8247adb8e4de618f5facb06854,PodSandboxId:5c61163b7108bf85d4537d8c77f569e0131b69953317227f9772345f55bbc2c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171278970433
8297649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3232daa9-da88-4152-97c8-e86b3d50b0b8,},Annotations:map[string]string{io.kubernetes.container.hash: cbcd7332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a95b9d5058af0971fbe9adf827d0108e8ff6b55f972a8b472a87281cd5c8b3,PodSandboxId:23d5696225b73cd34b393dcbb17c06fffad8e530ba7eb26fe7c01152a53e47d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789683747194956,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9048145b75d9f795053d905e2e8df6b,},Annotations:map[string]string{io.kubernetes.container.hash: 82d5fd8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63f2ae0fa5319f246ad59d82927c2ad707f20092e6b32af71a1ef8a06307d39,PodSandboxId:3c389c5244238f1502f663a622edcfcdb39c842cc4f2ae8928f4e315e184c244,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712789683719979983,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadf0431cad5782a96391f4d14bd31409f9f925c9e8eedcd6ab3b49a064480,PodSandboxId:25b1516115f454d2e578c2f96caaaf77dfdf11228328346a5e7cd260067cd299,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712789683641924803,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77206909f47e74b9e84d7a2b5eedaafc,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95,PodSandboxId:552181e571efbb16150f1f7d7ef33924726c87eebc816065701e2533cfc0e011,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712789683649144649,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b23d99268ec85dfc255b89a65a2b7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d763c490e6c5df0e625305c075d661241fa8d19dcca80f810ba34f1696f93e,PodSandboxId:3d26f66a41926c5e65c921e6568a934c5685981497e4ea29c9426bc6a5c737ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_EXITED,CreatedAt:1712789389129474602,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b8c0509-3453-4df2-85b1-6f66f533dd6f name=/runtime.v1.RuntimeService/ListContainers
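	The ListContainers round-trips above are ordinary CRI calls against the node's crio socket (unix:///var/run/crio/crio.sock, per the cri-socket annotation further down). Below is a minimal, hypothetical Go sketch, not part of the minikube test harness, that issues the same call; it assumes the k8s.io/cri-api and google.golang.org/grpc modules are available and that the socket path matches this node.

	// Hypothetical sketch: list containers over the CRI, mirroring the
	// /runtime.v1.RuntimeService/ListContainers calls logged above.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// CRI-O serves the CRI on a unix socket (assumed path below).
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// An empty filter returns the full list, matching the
		// "No filters were applied" entries in the log.
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  attempt=%d  %s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}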
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6e28085ff85a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   41c42efa2a202       coredns-7db6d8ff4d-jm2zw
	ec8d0d0210447       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   98aa9bcbe6e47       coredns-7db6d8ff4d-v599p
	4a76d2c57e073       69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061   9 minutes ago       Running             kube-proxy                0                   312e184bed654       kube-proxy-24vhc
	d6174309d2279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5c61163b7108b       storage-provisioner
	e6a95b9d5058a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   23d5696225b73       etcd-no-preload-646133
	f63f2ae0fa531       bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895   9 minutes ago       Running             kube-apiserver            2                   3c389c5244238       kube-apiserver-no-preload-646133
	f3638946755f9       ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b   9 minutes ago       Running             kube-scheduler            2                   552181e571efb       kube-scheduler-no-preload-646133
	60fadf0431cad       577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090   9 minutes ago       Running             kube-controller-manager   2                   25b1516115f45       kube-controller-manager-no-preload-646133
	71d763c490e6c       bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895   14 minutes ago      Exited              kube-apiserver            1                   3d26f66a41926       kube-apiserver-no-preload-646133
	
	
	==> coredns [e6e28085ff85adfb86327f333e1bfd9473635076de9a2742d0d7db843b0332df] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ec8d0d02104473d1b74d3f7cdd550cd9c1329263c9ae211f5d79d32a15895ae0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-646133
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-646133
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=no-preload-646133
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T22_54_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:54:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-646133
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 23:04:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 23:00:16 +0000   Wed, 10 Apr 2024 22:54:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 23:00:16 +0000   Wed, 10 Apr 2024 22:54:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 23:00:16 +0000   Wed, 10 Apr 2024 22:54:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 23:00:16 +0000   Wed, 10 Apr 2024 22:54:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.17
	  Hostname:    no-preload-646133
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8efe7b83d024249b9b4267a60de5316
	  System UUID:                d8efe7b8-3d02-4249-b9b4-267a60de5316
	  Boot ID:                    6711f87d-c85c-484a-a5ca-3dbae181297c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.1
	  Kube-Proxy Version:         v1.30.0-rc.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jm2zw                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-v599p                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-646133                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-646133             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-646133    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-24vhc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-646133             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-bj59f              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node no-preload-646133 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node no-preload-646133 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node no-preload-646133 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s   node-controller  Node no-preload-646133 event: Registered Node no-preload-646133 in Controller
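	The conditions and capacity figures in this describe output are what the e2e helpers effectively poll while waiting on the node. A minimal client-go sketch along the same lines is shown below; the kubeconfig path is illustrative, not the path the harness actually uses.

	// Hypothetical sketch: read back the node conditions shown above
	// using client-go. The kubeconfig path is an assumption.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-646133", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// MemoryPressure, DiskPressure, PIDPressure and Ready, as in the table above.
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
		}
	}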
	
	
	==> dmesg <==
	[  +0.054370] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042767] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.902286] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.003041] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.648525] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.457128] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.062454] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.082338] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.168635] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.133630] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.293343] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +17.290979] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[  +0.062501] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.355108] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +4.656273] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.709023] kauditd_printk_skb: 79 callbacks suppressed
	[Apr10 22:54] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.343083] systemd-fstab-generator[3991]: Ignoring "noauto" option for root device
	[  +6.554737] systemd-fstab-generator[4313]: Ignoring "noauto" option for root device
	[  +0.091931] kauditd_printk_skb: 54 callbacks suppressed
	[Apr10 22:55] systemd-fstab-generator[4515]: Ignoring "noauto" option for root device
	[  +0.118390] kauditd_printk_skb: 12 callbacks suppressed
	[Apr10 22:56] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [e6a95b9d5058af0971fbe9adf827d0108e8ff6b55f972a8b472a87281cd5c8b3] <==
	{"level":"info","ts":"2024-04-10T22:54:44.129165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 switched to configuration voters=(12054651828583680648)"}
	{"level":"info","ts":"2024-04-10T22:54:44.129316Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e7a7808069af5882","local-member-id":"a74ab9f845be4a88","added-peer-id":"a74ab9f845be4a88","added-peer-peer-urls":["https://192.168.50.17:2380"]}
	{"level":"info","ts":"2024-04-10T22:54:44.143163Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-10T22:54:44.143402Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a74ab9f845be4a88","initial-advertise-peer-urls":["https://192.168.50.17:2380"],"listen-peer-urls":["https://192.168.50.17:2380"],"advertise-client-urls":["https://192.168.50.17:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.17:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-10T22:54:44.143471Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-10T22:54:44.143753Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.17:2380"}
	{"level":"info","ts":"2024-04-10T22:54:44.143792Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.17:2380"}
	{"level":"info","ts":"2024-04-10T22:54:44.189646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-10T22:54:44.189826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-10T22:54:44.189859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 received MsgPreVoteResp from a74ab9f845be4a88 at term 1"}
	{"level":"info","ts":"2024-04-10T22:54:44.190101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 became candidate at term 2"}
	{"level":"info","ts":"2024-04-10T22:54:44.190209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 received MsgVoteResp from a74ab9f845be4a88 at term 2"}
	{"level":"info","ts":"2024-04-10T22:54:44.190243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 became leader at term 2"}
	{"level":"info","ts":"2024-04-10T22:54:44.190317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a74ab9f845be4a88 elected leader a74ab9f845be4a88 at term 2"}
	{"level":"info","ts":"2024-04-10T22:54:44.195057Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:54:44.195977Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a74ab9f845be4a88","local-member-attributes":"{Name:no-preload-646133 ClientURLs:[https://192.168.50.17:2379]}","request-path":"/0/members/a74ab9f845be4a88/attributes","cluster-id":"e7a7808069af5882","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-10T22:54:44.196203Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:54:44.200618Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-10T22:54:44.200736Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-10T22:54:44.196487Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:54:44.196672Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e7a7808069af5882","local-member-id":"a74ab9f845be4a88","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:54:44.201228Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:54:44.201285Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:54:44.206873Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.17:2379"}
	{"level":"info","ts":"2024-04-10T22:54:44.259736Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:04:09 up 14 min,  0 users,  load average: 0.32, 0.28, 0.20
	Linux no-preload-646133 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [71d763c490e6c5df0e625305c075d661241fa8d19dcca80f810ba34f1696f93e] <==
	W0410 22:54:35.654229       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.662146       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.668749       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.733350       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.738034       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.846621       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.846945       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.867725       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.958048       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.023135       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.086929       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.104877       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.160055       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.338403       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.380038       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.395103       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.483037       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.579987       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.697149       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.781862       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.960485       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:37.121992       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:37.137335       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:37.222690       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:37.339828       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f63f2ae0fa5319f246ad59d82927c2ad707f20092e6b32af71a1ef8a06307d39] <==
	I0410 22:58:04.696859       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 22:59:46.461705       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:59:46.461840       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0410 22:59:47.461945       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:59:47.462075       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 22:59:47.462102       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 22:59:47.462667       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 22:59:47.462723       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 22:59:47.463298       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:00:47.462878       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:00:47.463151       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:00:47.463191       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:00:47.464276       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:00:47.464338       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:00:47.464363       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:02:47.463415       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:02:47.463677       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:02:47.463689       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:02:47.465617       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:02:47.465670       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:02:47.465681       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [60fadf0431cad5782a96391f4d14bd31409f9f925c9e8eedcd6ab3b49a064480] <==
	I0410 22:58:32.450483       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 22:59:02.006646       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:59:02.458938       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 22:59:32.012252       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 22:59:32.466410       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:00:02.018531       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:00:02.474878       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:00:32.025424       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:00:32.484813       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:01:02.031746       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:01:02.494310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0410 23:01:04.248765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="280.842µs"
	I0410 23:01:19.247127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="257.459µs"
	E0410 23:01:32.037362       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:01:32.504409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:02:02.043700       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:02:02.512895       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:02:32.049913       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:02:32.522087       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:03:02.057774       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:03:02.532716       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:03:32.064350       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:03:32.541221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:04:02.070393       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:04:02.549380       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
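	Both the apiserver excerpt above (repeated 503s for v1beta1.metrics.k8s.io) and these controller-manager entries point at the aggregated metrics API never becoming available, which lines up with metrics-server-569cc877fc-bj59f having no container in the status table earlier. A small discovery-client sketch, again with an illustrative kubeconfig path, reproduces the same probe:

	// Hypothetical sketch: probe the aggregated metrics.k8s.io/v1beta1 group.
	// While metrics-server is unavailable this returns the same
	// "service unavailable" class of error surfaced in the logs above.
	package main

	import (
		"fmt"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			panic(err)
		}

		res, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		if err != nil {
			fmt.Println("metrics.k8s.io/v1beta1 not served:", err)
			return
		}
		for _, r := range res.APIResources {
			fmt.Println("served resource:", r.Name)
		}
	}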
	
	
	==> kube-proxy [4a76d2c57e073bd9bc6ada95b65d50ec62897e37c5bceb09a83810b1013edc46] <==
	I0410 22:55:05.170138       1 server_linux.go:69] "Using iptables proxy"
	I0410 22:55:05.194321       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.17"]
	I0410 22:55:05.246241       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0410 22:55:05.246404       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:55:05.246444       1 server_linux.go:165] "Using iptables Proxier"
	I0410 22:55:05.249897       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:55:05.250143       1 server.go:872] "Version info" version="v1.30.0-rc.1"
	I0410 22:55:05.250190       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:55:05.252259       1 config.go:192] "Starting service config controller"
	I0410 22:55:05.252314       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0410 22:55:05.252366       1 config.go:101] "Starting endpoint slice config controller"
	I0410 22:55:05.252382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0410 22:55:05.254571       1 config.go:319] "Starting node config controller"
	I0410 22:55:05.254622       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0410 22:55:05.352928       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0410 22:55:05.352993       1 shared_informer.go:320] Caches are synced for service config
	I0410 22:55:05.354974       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95] <==
	E0410 22:54:46.504473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0410 22:54:46.504559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0410 22:54:46.504681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0410 22:54:46.504794       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0410 22:54:47.343668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0410 22:54:47.343725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0410 22:54:47.374045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0410 22:54:47.374107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0410 22:54:47.465812       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0410 22:54:47.466057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0410 22:54:47.474841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0410 22:54:47.474964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0410 22:54:47.505477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0410 22:54:47.505980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0410 22:54:47.569782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0410 22:54:47.570083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0410 22:54:47.644334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0410 22:54:47.644408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0410 22:54:47.644459       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0410 22:54:47.644541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0410 22:54:47.681162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0410 22:54:47.683351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0410 22:54:47.991021       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0410 22:54:47.991079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0410 22:54:50.257400       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 10 23:01:49 no-preload-646133 kubelet[4320]: E0410 23:01:49.254430    4320 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 10 23:01:49 no-preload-646133 kubelet[4320]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:01:49 no-preload-646133 kubelet[4320]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:01:49 no-preload-646133 kubelet[4320]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:01:49 no-preload-646133 kubelet[4320]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:01:59 no-preload-646133 kubelet[4320]: E0410 23:01:59.229106    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:02:14 no-preload-646133 kubelet[4320]: E0410 23:02:14.228941    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:02:29 no-preload-646133 kubelet[4320]: E0410 23:02:29.229141    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:02:42 no-preload-646133 kubelet[4320]: E0410 23:02:42.228075    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:02:49 no-preload-646133 kubelet[4320]: E0410 23:02:49.254281    4320 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 10 23:02:49 no-preload-646133 kubelet[4320]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:02:49 no-preload-646133 kubelet[4320]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:02:49 no-preload-646133 kubelet[4320]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:02:49 no-preload-646133 kubelet[4320]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:02:56 no-preload-646133 kubelet[4320]: E0410 23:02:56.228141    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:03:10 no-preload-646133 kubelet[4320]: E0410 23:03:10.228459    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:03:24 no-preload-646133 kubelet[4320]: E0410 23:03:24.228400    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:03:37 no-preload-646133 kubelet[4320]: E0410 23:03:37.231273    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:03:48 no-preload-646133 kubelet[4320]: E0410 23:03:48.228084    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:03:49 no-preload-646133 kubelet[4320]: E0410 23:03:49.253953    4320 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 10 23:03:49 no-preload-646133 kubelet[4320]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:03:49 no-preload-646133 kubelet[4320]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:03:49 no-preload-646133 kubelet[4320]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:03:49 no-preload-646133 kubelet[4320]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:04:03 no-preload-646133 kubelet[4320]: E0410 23:04:03.228566    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	
	
	==> storage-provisioner [d6174309d2279bfc9949db4340f399294d0a6a8247adb8e4de618f5facb06854] <==
	I0410 22:55:04.822158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0410 22:55:04.880121       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0410 22:55:04.884929       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0410 22:55:04.918007       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0410 22:55:04.918289       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-646133_b89aa634-ed5c-460a-8459-c995874103cc!
	I0410 22:55:04.918941       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"724620de-3bae-438f-81ec-b58b460a9711", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-646133_b89aa634-ed5c-460a-8459-c995874103cc became leader
	I0410 22:55:05.018613       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-646133_b89aa634-ed5c-460a-8459-c995874103cc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-646133 -n no-preload-646133
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-646133 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-bj59f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-646133 describe pod metrics-server-569cc877fc-bj59f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-646133 describe pod metrics-server-569cc877fc-bj59f: exit status 1 (65.167352ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-bj59f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-646133 describe pod metrics-server-569cc877fc-bj59f: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.39s)
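Note: the repeated ImagePullBackOff entries for fake.domain/registry.k8s.io/echoserver:1.4 in the kubelet log above appear intentional (the registry is literally fake.domain), so the failing metrics-server pull is background noise; the failure recorded here is that no pod matching the step's label selector turned up within the 9m0s wait. Assuming the no-preload-646133 kubectl context is still available, and that this step uses the same k8s-app=kubernetes-dashboard selector shown for the old-k8s-version run below, a manual spot check would be roughly:

	kubectl --context no-preload-646133 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-646133 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'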

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
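Note: every warning in the run below is the same symptom: the helper's periodic pod-list poll cannot reach the API server at https://192.168.61.178:8443 at all (connection refused), so this is an apiserver-not-up problem rather than an RBAC or label-selector one. When reproducing by hand, the same condition can be confirmed with a direct probe of the endpoint taken from the log, for example:

	curl -k https://192.168.61.178:8443/healthz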
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
[the identical warning above was emitted 10 times in succession here; duplicates collapsed]
E0410 22:56:54.112300   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
[the same connection-refused warning was emitted 5 times in succession here; duplicates collapsed]
E0410 22:56:59.610095   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
[the same connection-refused warning was emitted 106 times in succession here; duplicates collapsed]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
[the WARNING above was emitted 49 times in a row while the apiserver at 192.168.61.178:8443 refused connections]
E0410 23:01:54.112015   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
[the same WARNING was emitted 5 more times]
E0410 23:01:59.610373   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
[the same WARNING was emitted a further 125 times while the apiserver remained unreachable]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
E0410 23:05:02.659418   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
(previous warning repeated 41 times until the poll's context deadline was reached)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862528 -n old-k8s-version-862528
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 2 (245.774844ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-862528" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
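The timeout above means the kubernetes-dashboard pod never became ready within 9m0s because the apiserver on 192.168.61.178:8443 kept refusing connections after the stop/start cycle. A minimal sketch of how the same check could be reproduced by hand against this profile (the profile name, namespace and label selector are taken from the log above; running kubectl interactively like this is an assumption, not part of the test suite):

# Hedged sketch: reproduce the readiness probe that timed out above.
out/minikube-linux-amd64 status -p old-k8s-version-862528
# The kubeconfig context created by minikube matches the profile name.
kubectl --context old-k8s-version-862528 -n kubernetes-dashboard \
  get pods -l k8s-app=kubernetes-dashboard -o wide
# If the apiserver still refuses connections, the node-level logs are the next stop:
out/minikube-linux-amd64 -p old-k8s-version-862528 logs -n 25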
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 2 (250.924698ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
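Both probes above use minikube's Go-template status output ({{.APIServer}} and {{.Host}}). As a hedged sketch, several fields can be queried in a single call; the field names Host, Kubelet, APIServer and Kubeconfig are the ones the status command prints, while combining them in one template here is only an illustrative assumption:

# Hedged sketch: query several status fields in one invocation.
out/minikube-linux-amd64 status -p old-k8s-version-862528 \
  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'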
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-862528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-862528 logs -n 25: (1.581668282s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-646133             | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:41 UTC |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:42 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-706500            | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC | 10 Apr 24 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862528        | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-646133                  | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-464519                              | cert-expiration-464519       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-676292 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	|         | disable-driver-mounts-676292                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862528             | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-519831  | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-706500                 | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:54 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-519831       | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC | 10 Apr 24 22:53 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:46:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:46:47.395706   58701 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:46:47.395991   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396002   58701 out.go:304] Setting ErrFile to fd 2...
	I0410 22:46:47.396019   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396208   58701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:46:47.396802   58701 out.go:298] Setting JSON to false
	I0410 22:46:47.397726   58701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5350,"bootTime":1712783858,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:46:47.397786   58701 start.go:139] virtualization: kvm guest
	I0410 22:46:47.400191   58701 out.go:177] * [default-k8s-diff-port-519831] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:46:47.401578   58701 notify.go:220] Checking for updates...
	I0410 22:46:47.402880   58701 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:46:47.404311   58701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:46:47.405790   58701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:46:47.407012   58701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:46:47.408130   58701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:46:47.409497   58701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:46:47.411183   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:46:47.411591   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.411632   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.426322   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0410 22:46:47.426759   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.427345   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.427366   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.427716   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.427926   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.428221   58701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:46:47.428646   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.428696   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.444105   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0410 22:46:47.444537   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.445035   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.445058   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.445398   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.445592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.480451   58701 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:46:47.481837   58701 start.go:297] selected driver: kvm2
	I0410 22:46:47.481852   58701 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.481985   58701 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:46:47.482657   58701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.482750   58701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:46:47.498330   58701 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:46:47.498668   58701 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:46:47.498735   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:46:47.498748   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:46:47.498784   58701 start.go:340] cluster config:
	{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.498877   58701 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.500723   58701 out.go:177] * Starting "default-k8s-diff-port-519831" primary control-plane node in "default-k8s-diff-port-519831" cluster
	I0410 22:46:47.180678   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:47.501967   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:46:47.502009   58701 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 22:46:47.502030   58701 cache.go:56] Caching tarball of preloaded images
	I0410 22:46:47.502108   58701 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:46:47.502118   58701 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 22:46:47.502202   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:46:47.502366   58701 start.go:360] acquireMachinesLock for default-k8s-diff-port-519831: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:46:50.252732   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:56.332647   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:59.404660   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:05.484717   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:08.556632   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:14.636753   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:17.708788   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:23.788661   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:26.860683   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:32.940630   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:36.012689   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:42.092749   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:45.164706   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:51.244682   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:54.316652   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:00.396637   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:03.468672   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:06.472768   57719 start.go:364] duration metric: took 4m5.937893783s to acquireMachinesLock for "old-k8s-version-862528"
	I0410 22:48:06.472833   57719 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:06.472852   57719 fix.go:54] fixHost starting: 
	I0410 22:48:06.473157   57719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:06.473186   57719 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:06.488728   57719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0410 22:48:06.489157   57719 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:06.489590   57719 main.go:141] libmachine: Using API Version  1
	I0410 22:48:06.489612   57719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:06.490011   57719 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:06.490171   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:06.490337   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetState
	I0410 22:48:06.491997   57719 fix.go:112] recreateIfNeeded on old-k8s-version-862528: state=Stopped err=<nil>
	I0410 22:48:06.492030   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	W0410 22:48:06.492234   57719 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:06.493891   57719 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862528" ...
	I0410 22:48:06.469869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:06.469904   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470235   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:48:06.470261   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470529   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:48:06.472589   57270 machine.go:97] duration metric: took 4m35.561692081s to provisionDockerMachine
	I0410 22:48:06.472636   57270 fix.go:56] duration metric: took 4m35.586484815s for fixHost
	I0410 22:48:06.472646   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 4m35.586540892s
	W0410 22:48:06.472671   57270 start.go:713] error starting host: provision: host is not running
	W0410 22:48:06.472773   57270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0410 22:48:06.472785   57270 start.go:728] Will try again in 5 seconds ...
	I0410 22:48:06.495233   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .Start
	I0410 22:48:06.495416   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring networks are active...
	I0410 22:48:06.496254   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network default is active
	I0410 22:48:06.496589   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network mk-old-k8s-version-862528 is active
	I0410 22:48:06.497002   57719 main.go:141] libmachine: (old-k8s-version-862528) Getting domain xml...
	I0410 22:48:06.497751   57719 main.go:141] libmachine: (old-k8s-version-862528) Creating domain...
	I0410 22:48:07.722703   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting to get IP...
	I0410 22:48:07.723942   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:07.724373   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:07.724451   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:07.724338   59021 retry.go:31] will retry after 284.455366ms: waiting for machine to come up
	I0410 22:48:08.011077   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.011598   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.011628   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.011545   59021 retry.go:31] will retry after 337.946102ms: waiting for machine to come up
	I0410 22:48:08.351219   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.351725   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.351744   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.351691   59021 retry.go:31] will retry after 454.774669ms: waiting for machine to come up
	I0410 22:48:08.808516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.808953   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.808991   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.808893   59021 retry.go:31] will retry after 484.667282ms: waiting for machine to come up
	I0410 22:48:09.295665   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.296127   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.296148   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.296083   59021 retry.go:31] will retry after 515.00238ms: waiting for machine to come up
	I0410 22:48:09.812855   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.813337   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.813362   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.813289   59021 retry.go:31] will retry after 596.67118ms: waiting for machine to come up
	I0410 22:48:10.411103   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:10.411616   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:10.411640   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:10.411568   59021 retry.go:31] will retry after 1.035822512s: waiting for machine to come up
	I0410 22:48:11.473748   57270 start.go:360] acquireMachinesLock for no-preload-646133: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:48:11.448894   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:11.449358   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:11.449388   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:11.449315   59021 retry.go:31] will retry after 1.258446774s: waiting for machine to come up
	I0410 22:48:12.709048   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:12.709587   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:12.709618   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:12.709530   59021 retry.go:31] will retry after 1.149380432s: waiting for machine to come up
	I0410 22:48:13.860550   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:13.861084   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:13.861110   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:13.861028   59021 retry.go:31] will retry after 1.733388735s: waiting for machine to come up
	I0410 22:48:15.595870   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:15.596447   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:15.596487   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:15.596343   59021 retry.go:31] will retry after 2.536794123s: waiting for machine to come up
	I0410 22:48:18.135592   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:18.136099   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:18.136128   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:18.136056   59021 retry.go:31] will retry after 3.390395523s: waiting for machine to come up
	I0410 22:48:21.528518   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:21.528964   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:21.529008   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:21.528906   59021 retry.go:31] will retry after 4.165145769s: waiting for machine to come up
	I0410 22:48:26.977460   58186 start.go:364] duration metric: took 3m29.815175662s to acquireMachinesLock for "embed-certs-706500"
	I0410 22:48:26.977524   58186 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:26.977532   58186 fix.go:54] fixHost starting: 
	I0410 22:48:26.977935   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:26.977965   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:26.994175   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0410 22:48:26.994552   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:26.995016   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:48:26.995040   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:26.995447   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:26.995652   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:26.995826   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:48:26.997547   58186 fix.go:112] recreateIfNeeded on embed-certs-706500: state=Stopped err=<nil>
	I0410 22:48:26.997580   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	W0410 22:48:26.997902   58186 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:27.000500   58186 out.go:177] * Restarting existing kvm2 VM for "embed-certs-706500" ...
	I0410 22:48:27.002204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Start
	I0410 22:48:27.002398   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring networks are active...
	I0410 22:48:27.003133   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network default is active
	I0410 22:48:27.003465   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network mk-embed-certs-706500 is active
	I0410 22:48:27.003863   58186 main.go:141] libmachine: (embed-certs-706500) Getting domain xml...
	I0410 22:48:27.004603   58186 main.go:141] libmachine: (embed-certs-706500) Creating domain...
	I0410 22:48:25.699595   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700129   57719 main.go:141] libmachine: (old-k8s-version-862528) Found IP for machine: 192.168.61.178
	I0410 22:48:25.700159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has current primary IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700166   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserving static IP address...
	I0410 22:48:25.700654   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserved static IP address: 192.168.61.178
	I0410 22:48:25.700676   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting for SSH to be available...
	I0410 22:48:25.700704   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.700732   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | skip adding static IP to network mk-old-k8s-version-862528 - found existing host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"}
	I0410 22:48:25.700745   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Getting to WaitForSSH function...
	I0410 22:48:25.702929   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703290   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.703322   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703490   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH client type: external
	I0410 22:48:25.703519   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa (-rw-------)
	I0410 22:48:25.703551   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:25.703590   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | About to run SSH command:
	I0410 22:48:25.703635   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | exit 0
	I0410 22:48:25.832738   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | SSH cmd err, output: <nil>: 
	I0410 22:48:25.833133   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetConfigRaw
	I0410 22:48:25.833784   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:25.836323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.836874   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.836908   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.837156   57719 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json ...
	I0410 22:48:25.837472   57719 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:25.837502   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:25.837710   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.840159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840488   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.840516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840593   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.840815   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.840992   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.841134   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.841337   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.841543   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.841556   57719 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:25.957153   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:25.957189   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957438   57719 buildroot.go:166] provisioning hostname "old-k8s-version-862528"
	I0410 22:48:25.957461   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.960779   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961149   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.961184   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961332   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.961546   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961689   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961864   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.962020   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.962196   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.962207   57719 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862528 && echo "old-k8s-version-862528" | sudo tee /etc/hostname
	I0410 22:48:26.087073   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862528
	
	I0410 22:48:26.087099   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.089770   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090109   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.090140   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090261   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.090446   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090623   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090760   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.090951   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.091131   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.091155   57719 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:26.214422   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:26.214462   57719 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:26.214490   57719 buildroot.go:174] setting up certificates
	I0410 22:48:26.214498   57719 provision.go:84] configureAuth start
	I0410 22:48:26.214509   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:26.214793   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.217463   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217809   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.217850   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217975   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.219971   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220235   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.220265   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220480   57719 provision.go:143] copyHostCerts
	I0410 22:48:26.220526   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:26.220542   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:26.220604   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:26.220703   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:26.220712   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:26.220736   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:26.220789   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:26.220796   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:26.220817   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:26.220864   57719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862528 san=[127.0.0.1 192.168.61.178 localhost minikube old-k8s-version-862528]
	I0410 22:48:26.288372   57719 provision.go:177] copyRemoteCerts
	I0410 22:48:26.288445   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:26.288468   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.290980   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291298   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.291339   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291444   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.291635   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.291809   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.291927   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.379823   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:26.405285   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0410 22:48:26.430122   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:26.456124   57719 provision.go:87] duration metric: took 241.614364ms to configureAuth
	I0410 22:48:26.456154   57719 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:26.456356   57719 config.go:182] Loaded profile config "old-k8s-version-862528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:48:26.456480   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.459028   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459335   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.459366   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.459742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.459888   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.460037   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.460211   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.460379   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.460413   57719 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:26.732588   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:26.732614   57719 machine.go:97] duration metric: took 895.122467ms to provisionDockerMachine
	I0410 22:48:26.732627   57719 start.go:293] postStartSetup for "old-k8s-version-862528" (driver="kvm2")
	I0410 22:48:26.732641   57719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:26.732679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.733014   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:26.733044   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.735820   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736217   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.736244   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736418   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.736630   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.736840   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.737020   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.823452   57719 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:26.827806   57719 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:26.827827   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:26.827899   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:26.828009   57719 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:26.828122   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:26.837564   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:26.862278   57719 start.go:296] duration metric: took 129.638185ms for postStartSetup
	I0410 22:48:26.862325   57719 fix.go:56] duration metric: took 20.389482643s for fixHost
	I0410 22:48:26.862346   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.864911   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865277   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.865301   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865419   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.865597   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865872   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.866083   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.866283   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.866300   57719 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:26.977317   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789306.948982315
	
	I0410 22:48:26.977337   57719 fix.go:216] guest clock: 1712789306.948982315
	I0410 22:48:26.977344   57719 fix.go:229] Guest: 2024-04-10 22:48:26.948982315 +0000 UTC Remote: 2024-04-10 22:48:26.862329953 +0000 UTC m=+266.486936912 (delta=86.652362ms)
	I0410 22:48:26.977362   57719 fix.go:200] guest clock delta is within tolerance: 86.652362ms
	I0410 22:48:26.977366   57719 start.go:83] releasing machines lock for "old-k8s-version-862528", held for 20.504554043s
	I0410 22:48:26.977386   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.977653   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.980035   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980376   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.980419   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980602   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981224   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981421   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981516   57719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:26.981558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.981645   57719 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:26.981670   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.984375   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984568   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984840   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.984868   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984953   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985030   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.985079   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.985118   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985236   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985277   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985374   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985450   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.985516   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985635   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:27.105002   57719 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:27.111205   57719 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:27.261678   57719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:27.268336   57719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:27.268423   57719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:27.290099   57719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:27.290122   57719 start.go:494] detecting cgroup driver to use...
	I0410 22:48:27.290174   57719 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:27.308787   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:27.325557   57719 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:27.325611   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:27.340859   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:27.355570   57719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:27.479670   57719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:27.653364   57719 docker.go:233] disabling docker service ...
	I0410 22:48:27.653424   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:27.669775   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:27.683654   57719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:27.813212   57719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:27.929620   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:27.946085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:27.966341   57719 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0410 22:48:27.966404   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.978022   57719 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:27.978111   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.989324   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.001429   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
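The four sed runs above point CRI-O at the registry.k8s.io/pause:3.2 pause image and switch it to the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf. A sketch that assembles the same commands (the helper below is illustrative, not minikube's own code):

package main

import "fmt"

// crioSedCommands returns the shell commands shown in the log for
// rewriting the CRI-O drop-in config in place.
func crioSedCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		"sudo sed -i '/conmon_cgroup = .*/d' " + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
	}
}

func main() {
	for _, c := range crioSedCommands("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(c)
	}
}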
	I0410 22:48:28.012965   57719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:28.024663   57719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:28.034362   57719 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:28.034423   57719 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:28.048740   57719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:48:28.060698   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:28.188526   57719 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:28.348442   57719 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:28.348523   57719 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:28.353501   57719 start.go:562] Will wait 60s for crictl version
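Both 60s waits above poll until a condition holds (the CRI-O socket existing, crictl answering). A minimal sketch of that wait loop for the socket path, assuming a 500ms poll interval since the real interval is not visible in the log:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls stat on a path until it exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}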
	I0410 22:48:28.353566   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:28.357486   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:28.391138   57719 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:28.391221   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.421399   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.455851   57719 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0410 22:48:28.457534   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:28.460913   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461297   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:28.461323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461558   57719 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:28.466450   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:28.480549   57719 kubeadm.go:877] updating cluster {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:28.480671   57719 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:48:28.480775   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:28.536971   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:28.537034   57719 ssh_runner.go:195] Run: which lz4
	I0410 22:48:28.541757   57719 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0410 22:48:28.546381   57719 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:28.546413   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0410 22:48:30.411805   57719 crio.go:462] duration metric: took 1.870076139s to copy over tarball
	I0410 22:48:30.411900   57719 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
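The preload path copies the ~473 MB tarball to /preloaded.tar.lz4 and unpacks it with the tar command above, reporting a duration metric for each step. A rough sketch of the extract-and-time step (the exec wrapper is an assumption, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload runs the same tar invocation the log shows and reports
// how long the extraction took.
func extractPreload(path string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", path)
	err := cmd.Run()
	return time.Since(start), err
}

func main() {
	d, err := extractPreload("/preloaded.tar.lz4")
	fmt.Println(d, err)
}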
	I0410 22:48:28.229217   58186 main.go:141] libmachine: (embed-certs-706500) Waiting to get IP...
	I0410 22:48:28.230257   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.230673   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.230724   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.230643   59155 retry.go:31] will retry after 262.296498ms: waiting for machine to come up
	I0410 22:48:28.494117   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.494631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.494660   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.494584   59155 retry.go:31] will retry after 237.287095ms: waiting for machine to come up
	I0410 22:48:28.733250   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.733795   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.733817   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.733755   59155 retry.go:31] will retry after 387.436239ms: waiting for machine to come up
	I0410 22:48:29.123585   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.124128   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.124163   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.124073   59155 retry.go:31] will retry after 428.418916ms: waiting for machine to come up
	I0410 22:48:29.554781   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.555244   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.555285   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.555235   59155 retry.go:31] will retry after 683.194159ms: waiting for machine to come up
	I0410 22:48:30.239955   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:30.240385   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:30.240463   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:30.240365   59155 retry.go:31] will retry after 764.240086ms: waiting for machine to come up
	I0410 22:48:31.006294   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:31.006789   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:31.006816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:31.006750   59155 retry.go:31] will retry after 1.113674235s: waiting for machine to come up
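The repeated "will retry after ..." lines show a growing, jittered delay while waiting for the embed-certs-706500 machine to obtain an IP. A generic sketch of that retry pattern; the schedule and cap below are assumptions, not retry.go's actual values:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts run out, sleeping a
// roughly doubling, jittered delay between tries.
func retry(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		base := time.Duration(1<<uint(i)) * 200 * time.Millisecond // assumed schedule
		if base > 3*time.Second {
			base = 3 * time.Second
		}
		delay := base/2 + time.Duration(rand.Int63n(int64(base/2)+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retry(5, func() error { return errors.New("no IP yet") })
}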
	I0410 22:48:33.358026   57719 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946092727s)
	I0410 22:48:33.358059   57719 crio.go:469] duration metric: took 2.946222933s to extract the tarball
	I0410 22:48:33.358069   57719 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:33.402924   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:33.441006   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:33.441033   57719 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:48:33.441090   57719 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.441142   57719 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.441203   57719 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.441210   57719 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.441318   57719 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0410 22:48:33.441339   57719 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.441375   57719 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.441395   57719 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442645   57719 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.442667   57719 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.442706   57719 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.442717   57719 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0410 22:48:33.442796   57719 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.442807   57719 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442814   57719 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.442866   57719 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.651119   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.652634   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.665548   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.669396   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.672510   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.674137   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0410 22:48:33.686915   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.756592   57719 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0410 22:48:33.756639   57719 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.756696   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.756696   57719 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0410 22:48:33.756789   57719 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.756810   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867043   57719 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0410 22:48:33.867061   57719 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0410 22:48:33.867090   57719 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.867091   57719 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.867135   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867166   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867185   57719 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0410 22:48:33.867220   57719 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.867252   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867261   57719 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0410 22:48:33.867303   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.867311   57719 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0410 22:48:33.867355   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867359   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.867286   57719 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0410 22:48:33.867452   57719 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.867481   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.871719   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.881086   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.964827   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.964854   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0410 22:48:33.964932   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0410 22:48:33.964948   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.976084   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0410 22:48:33.976155   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0410 22:48:33.976205   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0410 22:48:34.011460   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0410 22:48:34.289751   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:34.429542   57719 cache_images.go:92] duration metric: took 988.487885ms to LoadCachedImages
	W0410 22:48:34.429636   57719 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0410 22:48:34.429665   57719 kubeadm.go:928] updating node { 192.168.61.178 8443 v1.20.0 crio true true} ...
	I0410 22:48:34.429782   57719 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:34.429870   57719 ssh_runner.go:195] Run: crio config
	I0410 22:48:34.478794   57719 cni.go:84] Creating CNI manager for ""
	I0410 22:48:34.478829   57719 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:34.478845   57719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:34.478868   57719 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862528 NodeName:old-k8s-version-862528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0410 22:48:34.479065   57719 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862528"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:34.479147   57719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0410 22:48:34.489950   57719 binaries.go:44] Found k8s binaries, skipping transfer
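The "kubeadm options" struct followed by the rendered "kubeadm config" above suggests the YAML is produced from a template filled with per-cluster values. An illustrative text/template sketch for one fragment; the field names here are assumptions, not minikube's actual template variables:

package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	_ = t.Execute(os.Stdout, map[string]string{
		"AdvertiseAddress": "192.168.61.178",
		"APIServerPort":    "8443",
		"CRISocket":        "/var/run/crio/crio.sock",
		"NodeName":         "old-k8s-version-862528",
		"NodeIP":           "192.168.61.178",
	})
}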
	I0410 22:48:34.490007   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:34.500261   57719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0410 22:48:34.517530   57719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:34.534814   57719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0410 22:48:34.552669   57719 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:34.556612   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
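The /etc/hosts updates for host.minikube.internal and control-plane.minikube.internal both use the idempotent pattern above: drop any existing line for the name, append the fresh entry, and copy the file back. A small sketch that builds the same command (the helper name is hypothetical):

package main

import "fmt"

// hostsEntryCmd returns the bash one-liner from the log: remove any line
// ending in "<tab><host>", append "<ip><tab><host>", then install the file.
func hostsEntryCmd(ip, host string) string {
	return fmt.Sprintf(`{ grep -v $'\t%[2]s$' "/etc/hosts"; echo "%[1]s`+"\t"+`%[2]s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`, ip, host)
}

func main() {
	fmt.Println(hostsEntryCmd("192.168.61.178", "control-plane.minikube.internal"))
}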
	I0410 22:48:34.569643   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:34.700791   57719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:34.719682   57719 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528 for IP: 192.168.61.178
	I0410 22:48:34.719703   57719 certs.go:194] generating shared ca certs ...
	I0410 22:48:34.719722   57719 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:34.719900   57719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:34.719951   57719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:34.719965   57719 certs.go:256] generating profile certs ...
	I0410 22:48:34.720091   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.key
	I0410 22:48:34.720155   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c
	I0410 22:48:34.720199   57719 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key
	I0410 22:48:34.720337   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:34.720376   57719 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:34.720386   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:34.720438   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:34.720472   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:34.720502   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:34.720557   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:34.721238   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:34.769810   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:34.805397   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:34.846743   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:34.888720   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0410 22:48:34.915958   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:48:34.962182   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:34.992444   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:35.023525   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:35.051098   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:35.077305   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:35.102172   57719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:35.121381   57719 ssh_runner.go:195] Run: openssl version
	I0410 22:48:35.127869   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:35.140056   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145172   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145242   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.152081   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:48:35.164621   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:35.176511   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182164   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182217   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.188968   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:35.201491   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:35.213468   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218519   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218586   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.224872   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
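Each CA certificate above is installed by hashing it with openssl x509 -hash and linking it into /etc/ssl/certs under <hash>.0. A sketch of that step using the same openssl invocation (the helper is an assumption, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert asks openssl for the subject hash of a PEM certificate and
// symlinks the certificate into the system trust directory under <hash>.0.
func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}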
	I0410 22:48:35.236964   57719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:35.242262   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:35.249245   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:35.256301   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:35.263359   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:35.270166   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:35.276953   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
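The openssl x509 -checkend 86400 runs above confirm each control-plane certificate is still valid for at least the next 24 hours. An equivalent check in Go using crypto/x509 (a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// d from now, mirroring `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}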
	I0410 22:48:35.283529   57719 kubeadm.go:391] StartCluster: {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:35.283643   57719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:35.283700   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.328461   57719 cri.go:89] found id: ""
	I0410 22:48:35.328532   57719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:35.340207   57719 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:35.340235   57719 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:35.340245   57719 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:35.340293   57719 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:35.351212   57719 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:35.352189   57719 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:48:35.352989   57719 kubeconfig.go:62] /home/jenkins/minikube-integration/18610-5679/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862528" cluster setting kubeconfig missing "old-k8s-version-862528" context setting]
	I0410 22:48:35.353956   57719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:32.122313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:32.122773   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:32.122816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:32.122717   59155 retry.go:31] will retry after 1.052378413s: waiting for machine to come up
	I0410 22:48:33.176207   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:33.176621   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:33.176665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:33.176568   59155 retry.go:31] will retry after 1.548572633s: waiting for machine to come up
	I0410 22:48:34.726554   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:34.726992   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:34.727020   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:34.726938   59155 retry.go:31] will retry after 1.800911659s: waiting for machine to come up
	I0410 22:48:36.529629   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:36.530133   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:36.530164   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:36.530085   59155 retry.go:31] will retry after 2.434743044s: waiting for machine to come up
	I0410 22:48:35.428830   57719 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:35.479813   57719 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.178
	I0410 22:48:35.479853   57719 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:35.479882   57719 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:35.479940   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.520506   57719 cri.go:89] found id: ""
	I0410 22:48:35.520577   57719 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:35.538167   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:35.548571   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:35.548600   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:35.548662   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:35.558559   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:35.558612   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:35.568950   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:35.578644   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:35.578712   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:35.589075   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.600265   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:35.600321   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.611459   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:35.621712   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:35.621785   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:35.632133   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:35.643494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:35.775309   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.133286   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.35793645s)
	I0410 22:48:37.133334   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.368687   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.497136   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.584652   57719 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:37.584744   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.085293   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.585489   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.584951   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:40.085144   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.966866   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:38.967360   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:38.967383   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:38.967339   59155 retry.go:31] will retry after 3.219302627s: waiting for machine to come up
	I0410 22:48:40.585356   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.084839   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.585434   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.085797   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.585578   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.085621   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.585581   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.584785   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:45.085394   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
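The burst of pgrep runs above polls roughly every 500ms for the kube-apiserver process to appear after kubeadm's control-plane phase. A minimal sketch of that poll using the same pgrep command (the interval and timeout below are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer returns true once pgrep finds a matching process.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println(waitForAPIServer(time.Minute))
}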
	I0410 22:48:46.409467   58701 start.go:364] duration metric: took 1m58.907071516s to acquireMachinesLock for "default-k8s-diff-port-519831"
	I0410 22:48:46.409536   58701 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:46.409557   58701 fix.go:54] fixHost starting: 
	I0410 22:48:46.410030   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:46.410080   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:46.427877   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I0410 22:48:46.428357   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:46.428836   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:48:46.428858   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:46.429163   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:46.429354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:48:46.429494   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:48:46.431151   58701 fix.go:112] recreateIfNeeded on default-k8s-diff-port-519831: state=Stopped err=<nil>
	I0410 22:48:46.431192   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	W0410 22:48:46.431372   58701 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:46.433597   58701 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-519831" ...
	I0410 22:48:42.187835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:42.188266   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:42.188305   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:42.188191   59155 retry.go:31] will retry after 2.924293511s: waiting for machine to come up
	I0410 22:48:45.113669   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114211   58186 main.go:141] libmachine: (embed-certs-706500) Found IP for machine: 192.168.39.10
	I0410 22:48:45.114229   58186 main.go:141] libmachine: (embed-certs-706500) Reserving static IP address...
	I0410 22:48:45.114243   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has current primary IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114685   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.114711   58186 main.go:141] libmachine: (embed-certs-706500) DBG | skip adding static IP to network mk-embed-certs-706500 - found existing host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"}
	I0410 22:48:45.114721   58186 main.go:141] libmachine: (embed-certs-706500) Reserved static IP address: 192.168.39.10
	I0410 22:48:45.114728   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Getting to WaitForSSH function...
	I0410 22:48:45.114743   58186 main.go:141] libmachine: (embed-certs-706500) Waiting for SSH to be available...
	I0410 22:48:45.116708   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.116963   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.117007   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.117139   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH client type: external
	I0410 22:48:45.117167   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa (-rw-------)
	I0410 22:48:45.117198   58186 main.go:141] libmachine: (embed-certs-706500) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:45.117224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | About to run SSH command:
	I0410 22:48:45.117236   58186 main.go:141] libmachine: (embed-certs-706500) DBG | exit 0
	I0410 22:48:45.240518   58186 main.go:141] libmachine: (embed-certs-706500) DBG | SSH cmd err, output: <nil>: 
	I0410 22:48:45.240843   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetConfigRaw
	I0410 22:48:45.241532   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.243908   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244293   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.244317   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244576   58186 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/config.json ...
	I0410 22:48:45.244775   58186 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:45.244799   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:45.245004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.247248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247639   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.247665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247859   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.248039   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248217   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248375   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.248543   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.248746   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.248766   58186 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:45.357146   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:45.357177   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357428   58186 buildroot.go:166] provisioning hostname "embed-certs-706500"
	I0410 22:48:45.357447   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357624   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.360299   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360700   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.360796   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360838   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.361049   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361183   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361367   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.361537   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.361702   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.361716   58186 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-706500 && echo "embed-certs-706500" | sudo tee /etc/hostname
	I0410 22:48:45.487121   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-706500
	
	I0410 22:48:45.487160   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.490242   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490597   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.490625   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490805   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.491004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491359   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.491576   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.491792   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.491824   58186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:45.606186   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:45.606212   58186 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:45.606246   58186 buildroot.go:174] setting up certificates
	I0410 22:48:45.606257   58186 provision.go:84] configureAuth start
	I0410 22:48:45.606269   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.606594   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.609459   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.609893   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.609932   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.610134   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.612631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.612945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.612979   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.613144   58186 provision.go:143] copyHostCerts
	I0410 22:48:45.613193   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:45.613207   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:45.613262   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:45.613378   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:45.613393   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:45.613427   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:45.613495   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:45.613505   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:45.613529   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:45.613592   58186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.embed-certs-706500 san=[127.0.0.1 192.168.39.10 embed-certs-706500 localhost minikube]
	I0410 22:48:45.737049   58186 provision.go:177] copyRemoteCerts
	I0410 22:48:45.737105   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:45.737129   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.739712   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740060   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.740089   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740347   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.740589   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.740763   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.740957   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:45.828677   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:45.854080   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:48:45.878704   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:45.902611   58186 provision.go:87] duration metric: took 296.343353ms to configureAuth
	I0410 22:48:45.902640   58186 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:45.902879   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:48:45.902962   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.905588   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.905950   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.905972   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.906165   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.906360   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906473   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906561   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.906725   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.906887   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.906911   58186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:46.172772   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:46.172807   58186 machine.go:97] duration metric: took 928.014662ms to provisionDockerMachine
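	Note on the "%!s(MISSING)" tokens in the commands above (the /etc/sysconfig/crio.minikube printf) and below (date +%!s(MISSING).%!N(MISSING), the evictionHard "0%!"(MISSING) values): the SSH output above shows the intended CRIO_MINIKUBE_OPTIONS line was actually written on the guest, so the artifact is only in how the command is logged, not in what ran. Literal % characters in the command string are being rendered by Go's fmt package as format verbs with no matching operand somewhere in the logging path. A minimal, hypothetical reproduction (not minikube code):

	package main

	import "fmt"

	func main() {
		// The intended guest command contains literal % sequences ("%s", "%N").
		cmd := "date +%s.%N"
		// Passing it through a Printf-style call with no operands makes Go's fmt
		// package render each verb as missing an argument, which is exactly the
		// "%!s(MISSING)" / "%!N(MISSING)" text seen in the log lines.
		fmt.Printf(cmd + "\n") // prints: date +%!s(MISSING).%!N(MISSING)
	}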
	I0410 22:48:46.172823   58186 start.go:293] postStartSetup for "embed-certs-706500" (driver="kvm2")
	I0410 22:48:46.172836   58186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:46.172877   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.173197   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:46.173223   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.176113   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.176495   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176679   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.176896   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.177118   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.177328   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.260470   58186 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:46.265003   58186 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:46.265030   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:46.265088   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:46.265158   58186 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:46.265241   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:46.274931   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:46.300036   58186 start.go:296] duration metric: took 127.199834ms for postStartSetup
	I0410 22:48:46.300082   58186 fix.go:56] duration metric: took 19.322550114s for fixHost
	I0410 22:48:46.300108   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.302945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303252   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.303279   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303479   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.303700   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303861   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303990   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.304140   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:46.304308   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:46.304318   58186 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:46.409294   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789326.385898055
	
	I0410 22:48:46.409317   58186 fix.go:216] guest clock: 1712789326.385898055
	I0410 22:48:46.409327   58186 fix.go:229] Guest: 2024-04-10 22:48:46.385898055 +0000 UTC Remote: 2024-04-10 22:48:46.300087658 +0000 UTC m=+229.287947250 (delta=85.810397ms)
	I0410 22:48:46.409352   58186 fix.go:200] guest clock delta is within tolerance: 85.810397ms
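	The guest-clock check above compares the guest's `date +%s.%N` reading against the host-side reference time and only proceeds because the 85.810397ms delta is within tolerance. A rough sketch of that comparison, assuming the guest value arrives as a Unix seconds.nanoseconds string; this is illustrative only, not minikube's actual fix.go:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts a string like "1712789326.385898055" into a time.Time.
	// It assumes the fractional part is the full 9-digit %N nanosecond field.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1712789326.385898055")
		if err != nil {
			panic(err)
		}
		remote := time.Now() // host-side reference clock
		delta := remote.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (compared against a fixed tolerance)\n", delta)
	}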
	I0410 22:48:46.409360   58186 start.go:83] releasing machines lock for "embed-certs-706500", held for 19.431860062s
	I0410 22:48:46.409389   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.409752   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:46.412201   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412616   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.412651   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412790   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413361   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413559   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413617   58186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:46.413665   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.413796   58186 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:46.413831   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.416879   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417268   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417428   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.417630   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.417811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.417835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417858   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417938   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.418030   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.418154   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.418284   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.418463   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.529204   58186 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:46.535396   58186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:46.681100   58186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:46.687278   58186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:46.687340   58186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:46.703105   58186 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:46.703128   58186 start.go:494] detecting cgroup driver to use...
	I0410 22:48:46.703191   58186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:46.719207   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:46.733444   58186 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:46.733509   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:46.747369   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:46.762231   58186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:46.874897   58186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:47.023672   58186 docker.go:233] disabling docker service ...
	I0410 22:48:47.023749   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:47.038963   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:47.053827   58186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:46.435268   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Start
	I0410 22:48:46.435498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring networks are active...
	I0410 22:48:46.436266   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network default is active
	I0410 22:48:46.436691   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network mk-default-k8s-diff-port-519831 is active
	I0410 22:48:46.437163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Getting domain xml...
	I0410 22:48:46.437799   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Creating domain...
	I0410 22:48:47.206641   58186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:47.363331   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:47.380657   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:47.402234   58186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:48:47.402306   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.419356   58186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:47.419417   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.435320   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.450812   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.462588   58186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:47.474323   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.494156   58186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.515195   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.526148   58186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:47.536045   58186 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:47.536106   58186 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:47.549556   58186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
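	The failed sysctl above is expected on a freshly booted guest: /proc/sys/net/bridge/ only exists once the br_netfilter kernel module is loaded, which is why the very next steps are `modprobe br_netfilter` and enabling IPv4 forwarding. A hedged sketch of the same check-then-load-then-enable sequence (illustrative Go, not the crio.go code referenced in the log; it needs root to actually succeed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const knob = "/proc/sys/net/bridge/bridge-nf-call-iptables"

		// The sysctl knob only appears after br_netfilter is loaded.
		if _, err := os.Stat(knob); os.IsNotExist(err) {
			if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe br_netfilter failed: %v\n%s", err, out)
				return
			}
		}

		// Enable IPv4 forwarding, mirroring the "echo 1 > /proc/sys/net/ipv4/ip_forward" step.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			fmt.Printf("could not enable ip_forward (needs root): %v\n", err)
		}
	}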
	I0410 22:48:47.567236   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:47.702628   58186 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:47.848908   58186 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:47.849000   58186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:47.854126   58186 start.go:562] Will wait 60s for crictl version
	I0410 22:48:47.854191   58186 ssh_runner.go:195] Run: which crictl
	I0410 22:48:47.858095   58186 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:47.897714   58186 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:47.897805   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.927597   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.958357   58186 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:48:45.584769   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.085396   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.585857   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.085186   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.585668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.085585   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.585617   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.085227   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.585626   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:50.084900   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.959811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:47.962805   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963246   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:47.963276   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963510   58186 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:47.967753   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:47.981154   58186 kubeadm.go:877] updating cluster {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:47.981258   58186 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:48:47.981298   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:48.018208   58186 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:48:48.018274   58186 ssh_runner.go:195] Run: which lz4
	I0410 22:48:48.023613   58186 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:48:48.029036   58186 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:48.029063   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:48:49.637729   58186 crio.go:462] duration metric: took 1.61414003s to copy over tarball
	I0410 22:48:49.637796   58186 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:52.046454   58186 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.408634496s)
	I0410 22:48:52.046482   58186 crio.go:469] duration metric: took 2.408728343s to extract the tarball
	I0410 22:48:52.046489   58186 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:47.701355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting to get IP...
	I0410 22:48:47.702406   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.702994   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.703067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.702962   59362 retry.go:31] will retry after 292.834608ms: waiting for machine to come up
	I0410 22:48:47.997294   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997757   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997785   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.997701   59362 retry.go:31] will retry after 341.35168ms: waiting for machine to come up
	I0410 22:48:48.340842   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341347   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341379   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.341279   59362 retry.go:31] will retry after 438.041848ms: waiting for machine to come up
	I0410 22:48:48.780565   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781092   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781116   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.781038   59362 retry.go:31] will retry after 557.770882ms: waiting for machine to come up
	I0410 22:48:49.340858   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.341282   59362 retry.go:31] will retry after 637.316206ms: waiting for machine to come up
	I0410 22:48:49.980256   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980737   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980761   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.980696   59362 retry.go:31] will retry after 909.873955ms: waiting for machine to come up
	I0410 22:48:50.891776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892197   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892229   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:50.892147   59362 retry.go:31] will retry after 745.06949ms: waiting for machine to come up
	I0410 22:48:51.638436   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638907   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638933   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:51.638854   59362 retry.go:31] will retry after 1.060037191s: waiting for machine to come up
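	The "waiting for machine to come up" lines above poll libvirt for a DHCP lease with growing, jittered delays (≈0.29s, 0.34s, 0.44s, 0.56s, 0.64s, 0.91s, 0.75s, 1.06s). A generic retry-with-backoff sketch of that shape; the intervals and jitter below are assumptions for illustration, not the actual retry.go policy:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup() until it returns an address or the deadline passes,
	// sleeping a little longer (with jitter) after each failed attempt.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 4))
			fmt.Printf("attempt %d: will retry after %v\n", attempt, delay+jitter)
			time.Sleep(delay + jitter)
			delay += delay / 3 // grow the base delay gradually
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		// Toy lookup that "finds" an IP on the fourth call.
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.10", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}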
	I0410 22:48:50.585691   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.085669   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.585308   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.085393   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.585619   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.085643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.585076   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.585027   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.085629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
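	The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above (every ~500ms, from process 57719) are a liveness poll: pgrep's -f matches against the full command line, -x requires the whole line to match the pattern, -n picks the newest match, and the exit status (0 = found, 1 = no match) is the signal. A sketch of that polling loop; the 500ms interval is visible in the timestamps, while the overall timeout below is an assumption, and in the log the command runs over SSH on the guest rather than locally:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning reports whether a kube-apiserver process whose command line
	// mentions "minikube" exists, using the same pgrep invocation as the log.
	func apiserverRunning() bool {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil // pgrep exits 0 when at least one process matched
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative overall timeout
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			fmt.Println("kube-apiserver not found yet, polling again in 500ms")
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}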
	I0410 22:48:52.087135   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:52.139368   58186 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:48:52.139389   58186 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:48:52.139397   58186 kubeadm.go:928] updating node { 192.168.39.10 8443 v1.29.3 crio true true} ...
	I0410 22:48:52.139535   58186 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-706500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:52.139629   58186 ssh_runner.go:195] Run: crio config
	I0410 22:48:52.193347   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:48:52.193375   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:52.193390   58186 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:52.193429   58186 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-706500 NodeName:embed-certs-706500 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:48:52.193606   58186 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-706500"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:52.193686   58186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:48:52.206450   58186 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:52.206507   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:52.218898   58186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0410 22:48:52.239285   58186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:52.257083   58186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0410 22:48:52.275448   58186 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:52.279486   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:52.293308   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:52.428424   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:52.446713   58186 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500 for IP: 192.168.39.10
	I0410 22:48:52.446738   58186 certs.go:194] generating shared ca certs ...
	I0410 22:48:52.446759   58186 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:52.446937   58186 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:52.446980   58186 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:52.446990   58186 certs.go:256] generating profile certs ...
	I0410 22:48:52.447059   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/client.key
	I0410 22:48:52.447124   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key.f3045f1a
	I0410 22:48:52.447156   58186 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key
	I0410 22:48:52.447294   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:52.447328   58186 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:52.447335   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:52.447354   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:52.447374   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:52.447405   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:52.447457   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:52.448166   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:52.481862   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:52.530983   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:52.572191   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:52.614466   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0410 22:48:52.644331   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:48:52.672811   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:52.698376   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:52.723998   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:52.749405   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:52.777529   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:52.803663   58186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:52.822234   58186 ssh_runner.go:195] Run: openssl version
	I0410 22:48:52.830835   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:52.843425   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848384   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848444   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.854869   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:52.867228   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:52.879319   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884241   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884324   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.890349   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:52.902398   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:52.913996   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918757   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918824   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.924669   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
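
The lines above show the CA material being wired into the guest's system trust store: each PEM is copied to /usr/share/ca-certificates, hashed with `openssl x509 -hash -noout`, and symlinked into /etc/ssl/certs under the resulting <subject-hash>.0 name (e.g. b5213941.0 for minikubeCA.pem) so that system TLS clients pick it up. The Go snippet below is a minimal sketch of that hash-and-symlink step, not minikube's own helper; the function name and hard-coded paths are illustrative only.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // hashAndLink is an illustrative sketch of the step logged above: the
    // OpenSSL subject hash of a CA certificate becomes the <hash>.0 file name
    // that TLS libraries look up in the certs directory.
    func hashAndLink(certPath, certsDir string) error {
    	// Equivalent to: openssl x509 -hash -noout -in <certPath>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	// Equivalent to: ln -fs <certPath> <certsDir>/<hash>.0
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
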
	I0410 22:48:52.936581   58186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:52.941242   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:52.947526   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:52.953939   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:52.960447   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:52.966829   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:52.973148   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
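
The `stat` and the six `openssl x509 ... -checkend 86400` runs above confirm that each control-plane certificate exists and will still be valid 86400 seconds (24 hours) from now before the restart proceeds. A minimal Go sketch of that validity check (a hypothetical helper, not the code behind certs.go) could look like:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor24h mirrors "openssl x509 -checkend 86400" (sketch only): load a
    // PEM certificate and report whether it is still valid 24 hours from now.
    func validFor24h(path string) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	fmt.Println(ok, err)
    }
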
	I0410 22:48:52.979557   58186 kubeadm.go:391] StartCluster: {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:52.979669   58186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:52.979744   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.018394   58186 cri.go:89] found id: ""
	I0410 22:48:53.018479   58186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:53.030088   58186 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:53.030112   58186 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:53.030118   58186 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:53.030184   58186 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:53.041035   58186 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:53.042312   58186 kubeconfig.go:125] found "embed-certs-706500" server: "https://192.168.39.10:8443"
	I0410 22:48:53.044306   58186 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:53.054911   58186 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.10
	I0410 22:48:53.054948   58186 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:53.054974   58186 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:53.055020   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.093035   58186 cri.go:89] found id: ""
	I0410 22:48:53.093109   58186 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:53.111257   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:53.122098   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:53.122125   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:53.122176   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:53.133513   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:53.133587   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:53.144275   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:53.154921   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:53.155000   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:53.165604   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.175520   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:53.175582   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.186094   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:53.196086   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:53.196156   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:53.206564   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:53.217180   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:53.336883   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.151708   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.367165   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.457694   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.572579   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:54.572693   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.073196   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.572865   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.595374   58186 api_server.go:72] duration metric: took 1.022777759s to wait for apiserver process to appear ...
	I0410 22:48:55.595403   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:48:55.595424   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:52.701137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701574   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701606   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:52.701529   59362 retry.go:31] will retry after 1.792719263s: waiting for machine to come up
	I0410 22:48:54.496380   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496823   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:54.496740   59362 retry.go:31] will retry after 2.321115222s: waiting for machine to come up
	I0410 22:48:56.819654   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820140   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:56.820072   59362 retry.go:31] will retry after 2.57309135s: waiting for machine to come up
	I0410 22:48:55.585506   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.585876   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.085775   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.585260   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.585588   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.085661   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.585663   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:00.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.843447   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:48:58.843487   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:48:58.843504   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:58.962381   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:58.962431   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.095611   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.100754   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.100781   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.595968   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.606936   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.606977   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.096182   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.106346   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:00.106388   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.595923   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.600197   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:49:00.609220   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:00.609246   58186 api_server.go:131] duration metric: took 5.013835577s to wait for apiserver health ...
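
The preceding block is the apiserver health wait: the runner keeps GETting https://192.168.39.10:8443/healthz, treats the early 403 (anonymous user) and 500 responses (failing poststarthooks such as rbac/bootstrap-roles) as "not ready yet", and stops once the endpoint returns 200/ok, roughly five seconds after the control-plane phases were re-run. A minimal sketch of such a poll loop, assuming a hypothetical waitForHealthz helper and skipping TLS verification for brevity, might be:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers 200
    // or the timeout expires (illustrative sketch; 403/500 answers like the
    // ones in the log are simply retried).
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The check runs before client credentials matter, so this sketch
    		// skips certificate verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.10:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
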
	I0410 22:49:00.609256   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:49:00.609263   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:00.611220   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:00.612765   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:00.625567   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:00.648581   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:00.657652   58186 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:00.657688   58186 system_pods.go:61] "coredns-76f75df574-j4kj8" [1986e6b6-e6c7-4212-bdd5-10360a0b897c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:00.657696   58186 system_pods.go:61] "etcd-embed-certs-706500" [acbf9245-d4f8-4fa6-88a7-4f891f9f8403] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:00.657704   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [b9c79d1d-f571-4ed8-a68f-512e8a2a1705] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:00.657709   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [d229b85d-9a8d-4cd0-ac48-a6aea3769581] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:00.657715   58186 system_pods.go:61] "kube-proxy-8kzff" [ce35a33f-1697-44a7-ad64-83895236bc6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:00.657720   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [72c68a6c-beba-48a5-937b-51c40aab0386] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:00.657726   58186 system_pods.go:61] "metrics-server-57f55c9bc5-4r9pl" [40a91fc1-9e0a-4bcc-a2e9-65e9f2d2b960] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:00.657733   58186 system_pods.go:61] "storage-provisioner" [10f7637e-e6e0-4f04-b1eb-ac3bd205064f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:00.657742   58186 system_pods.go:74] duration metric: took 9.141859ms to wait for pod list to return data ...
	I0410 22:49:00.657752   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:00.662255   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:00.662300   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:00.662315   58186 node_conditions.go:105] duration metric: took 4.553643ms to run NodePressure ...
	I0410 22:49:00.662338   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:00.957923   58186 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962553   58186 kubeadm.go:733] kubelet initialised
	I0410 22:49:00.962575   58186 kubeadm.go:734] duration metric: took 4.616848ms waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962585   58186 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:00.968387   58186 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
	I0410 22:48:59.395416   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395864   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395893   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:59.395819   59362 retry.go:31] will retry after 2.378137008s: waiting for machine to come up
	I0410 22:49:01.776037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776587   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776641   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:49:01.776526   59362 retry.go:31] will retry after 4.360839049s: waiting for machine to come up
	I0410 22:49:00.585234   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.084884   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.585066   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.085697   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.585573   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.085552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.585521   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.584802   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:05.085266   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.975009   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:04.976854   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:06.141509   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142008   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Found IP for machine: 192.168.72.170
	I0410 22:49:06.142037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has current primary IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142047   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserving static IP address...
	I0410 22:49:06.142422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserved static IP address: 192.168.72.170
	I0410 22:49:06.142451   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for SSH to be available...
	I0410 22:49:06.142476   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.142499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | skip adding static IP to network mk-default-k8s-diff-port-519831 - found existing host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"}
	I0410 22:49:06.142518   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Getting to WaitForSSH function...
	I0410 22:49:06.144878   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145206   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.145238   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145326   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH client type: external
	I0410 22:49:06.145365   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa (-rw-------)
	I0410 22:49:06.145401   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:06.145421   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | About to run SSH command:
	I0410 22:49:06.145438   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | exit 0
	I0410 22:49:06.272546   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:06.272919   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetConfigRaw
	I0410 22:49:06.273605   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.276234   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276610   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.276644   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276851   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:49:06.277100   58701 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:06.277127   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:06.277400   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.279729   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.280146   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280295   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.280480   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280658   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280794   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.280939   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.281121   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.281138   58701 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:06.385219   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:06.385254   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385498   58701 buildroot.go:166] provisioning hostname "default-k8s-diff-port-519831"
	I0410 22:49:06.385527   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385716   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.388422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.388922   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.388963   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.389072   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.389292   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389462   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389600   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.389751   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.389924   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.389938   58701 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-519831 && echo "default-k8s-diff-port-519831" | sudo tee /etc/hostname
	I0410 22:49:06.507221   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-519831
	
	I0410 22:49:06.507252   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.509837   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510179   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.510225   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510385   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.510561   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510880   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.511040   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.511236   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.511262   58701 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-519831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-519831/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-519831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:06.626097   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:06.626129   58701 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:06.626153   58701 buildroot.go:174] setting up certificates
	I0410 22:49:06.626163   58701 provision.go:84] configureAuth start
	I0410 22:49:06.626173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.626499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.629067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629412   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.629450   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629559   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.632132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632517   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.632548   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632674   58701 provision.go:143] copyHostCerts
	I0410 22:49:06.632734   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:06.632755   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:06.632822   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:06.633021   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:06.633037   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:06.633078   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:06.633179   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:06.633191   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:06.633223   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:06.633295   58701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-519831 san=[127.0.0.1 192.168.72.170 default-k8s-diff-port-519831 localhost minikube]
	I0410 22:49:06.835016   58701 provision.go:177] copyRemoteCerts
	I0410 22:49:06.835077   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:06.835104   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.837769   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838124   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.838152   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838327   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.838519   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.838669   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.838808   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:06.921929   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:06.947855   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0410 22:49:06.972865   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:06.999630   58701 provision.go:87] duration metric: took 373.45654ms to configureAuth
	I0410 22:49:06.999658   58701 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:06.999872   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:49:06.999942   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.003015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003418   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.003452   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.003793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.003946   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.004062   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.004208   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.004425   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.004448   58701 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:07.273568   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:07.273601   58701 machine.go:97] duration metric: took 996.483382ms to provisionDockerMachine
	I0410 22:49:07.273618   58701 start.go:293] postStartSetup for "default-k8s-diff-port-519831" (driver="kvm2")
	I0410 22:49:07.273634   58701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:07.273660   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.274009   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:07.274040   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.276736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.277155   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.277537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.277740   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.277891   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.361056   58701 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:07.365729   58701 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:07.365759   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:07.365834   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:07.365935   58701 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:07.366064   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:07.376754   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:07.509384   57270 start.go:364] duration metric: took 56.035567079s to acquireMachinesLock for "no-preload-646133"
	I0410 22:49:07.509424   57270 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:49:07.509432   57270 fix.go:54] fixHost starting: 
	I0410 22:49:07.509837   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:07.509872   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:07.526882   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0410 22:49:07.527337   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:07.527780   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:49:07.527801   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:07.528077   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:07.528238   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:07.528366   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:49:07.529732   57270 fix.go:112] recreateIfNeeded on no-preload-646133: state=Stopped err=<nil>
	I0410 22:49:07.529755   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	W0410 22:49:07.529878   57270 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:49:07.531875   57270 out.go:177] * Restarting existing kvm2 VM for "no-preload-646133" ...
	I0410 22:49:07.402691   58701 start.go:296] duration metric: took 129.059293ms for postStartSetup
	I0410 22:49:07.402731   58701 fix.go:56] duration metric: took 20.99318672s for fixHost
	I0410 22:49:07.402751   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.405634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.405955   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.405996   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.406161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.406378   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406647   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.406826   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.407062   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.407079   58701 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0410 22:49:07.509210   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789347.471050157
	
	I0410 22:49:07.509233   58701 fix.go:216] guest clock: 1712789347.471050157
	I0410 22:49:07.509241   58701 fix.go:229] Guest: 2024-04-10 22:49:07.471050157 +0000 UTC Remote: 2024-04-10 22:49:07.402735415 +0000 UTC m=+140.054227768 (delta=68.314742ms)
	I0410 22:49:07.509287   58701 fix.go:200] guest clock delta is within tolerance: 68.314742ms
	I0410 22:49:07.509297   58701 start.go:83] releasing machines lock for "default-k8s-diff-port-519831", held for 21.099785205s
	I0410 22:49:07.509328   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.509613   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:07.512255   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.512667   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512827   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513364   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513531   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513610   58701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:07.513649   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.513750   58701 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:07.513771   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.516338   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516685   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.516802   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516951   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517142   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.517161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.517310   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517470   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517602   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517604   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.517765   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.594218   58701 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:07.633783   58701 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:07.790430   58701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:07.797279   58701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:07.797358   58701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:07.815457   58701 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:07.815488   58701 start.go:494] detecting cgroup driver to use...
	I0410 22:49:07.815561   58701 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:07.833038   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:07.848577   58701 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:07.848648   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:07.863609   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:07.878299   58701 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:07.999388   58701 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:08.155534   58701 docker.go:233] disabling docker service ...
	I0410 22:49:08.155613   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:08.175545   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:08.195923   58701 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:08.340282   58701 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:08.485647   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:08.500245   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:08.520493   58701 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:08.520582   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.535455   58701 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:08.535521   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.547058   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.559638   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.571374   58701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:08.583796   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.598091   58701 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.622634   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.633858   58701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:08.645114   58701 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:08.645167   58701 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:08.660204   58701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:08.671345   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:08.804523   58701 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:08.953644   58701 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:08.953717   58701 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:08.958661   58701 start.go:562] Will wait 60s for crictl version
	I0410 22:49:08.958715   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:49:08.962938   58701 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:09.006335   58701 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:49:09.006425   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.037315   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.069366   58701 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:49:07.533174   57270 main.go:141] libmachine: (no-preload-646133) Calling .Start
	I0410 22:49:07.533352   57270 main.go:141] libmachine: (no-preload-646133) Ensuring networks are active...
	I0410 22:49:07.534117   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network default is active
	I0410 22:49:07.534413   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network mk-no-preload-646133 is active
	I0410 22:49:07.534851   57270 main.go:141] libmachine: (no-preload-646133) Getting domain xml...
	I0410 22:49:07.535553   57270 main.go:141] libmachine: (no-preload-646133) Creating domain...
	I0410 22:49:08.844990   57270 main.go:141] libmachine: (no-preload-646133) Waiting to get IP...
	I0410 22:49:08.845908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:08.846363   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:08.846459   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:08.846332   59513 retry.go:31] will retry after 241.150391ms: waiting for machine to come up
	I0410 22:49:09.088961   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.089455   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.089489   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.089417   59513 retry.go:31] will retry after 349.96397ms: waiting for machine to come up
	I0410 22:49:09.441226   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.441799   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.441828   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.441754   59513 retry.go:31] will retry after 444.576999ms: waiting for machine to come up
	I0410 22:49:05.585408   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.085250   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.585503   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.085422   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.584909   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.084863   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.585859   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.085175   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.585660   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:10.085221   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.475385   58186 pod_ready.go:92] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:07.475414   58186 pod_ready.go:81] duration metric: took 6.506993581s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:07.475424   58186 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:09.486133   58186 pod_ready.go:102] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:11.483972   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.483994   58186 pod_ready.go:81] duration metric: took 4.008564427s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.484005   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490340   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.490380   58186 pod_ready.go:81] duration metric: took 6.362017ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490399   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497078   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.497110   58186 pod_ready.go:81] duration metric: took 6.701645ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497124   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504091   58186 pod_ready.go:92] pod "kube-proxy-8kzff" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.504118   58186 pod_ready.go:81] duration metric: took 6.985136ms for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504132   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510619   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.510656   58186 pod_ready.go:81] duration metric: took 6.513031ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510674   58186 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:09.070592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:09.073850   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:09.074190   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074388   58701 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:09.079170   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:09.093764   58701 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:09.093973   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:49:09.094040   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:09.140874   58701 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:49:09.140951   58701 ssh_runner.go:195] Run: which lz4
	I0410 22:49:09.146775   58701 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0410 22:49:09.152876   58701 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:49:09.152917   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:49:10.827934   58701 crio.go:462] duration metric: took 1.681191787s to copy over tarball
	I0410 22:49:10.828019   58701 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:49:09.888688   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.892576   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.892607   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.889179   59513 retry.go:31] will retry after 560.585608ms: waiting for machine to come up
	I0410 22:49:10.451001   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:10.451630   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:10.451663   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:10.451590   59513 retry.go:31] will retry after 601.519186ms: waiting for machine to come up
	I0410 22:49:11.054324   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.054664   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.054693   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.054653   59513 retry.go:31] will retry after 750.183717ms: waiting for machine to come up
	I0410 22:49:11.805908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.806303   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.806331   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.806254   59513 retry.go:31] will retry after 883.805148ms: waiting for machine to come up
	I0410 22:49:12.691316   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:12.691861   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:12.691893   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:12.691804   59513 retry.go:31] will retry after 1.39605629s: waiting for machine to come up
	I0410 22:49:14.090350   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:14.090795   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:14.090821   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:14.090753   59513 retry.go:31] will retry after 1.388324423s: waiting for machine to come up
	I0410 22:49:10.585333   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.585062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.085191   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.585644   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.085615   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.585355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.085270   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.584868   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:15.085639   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.521844   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:16.041569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:13.328492   58701 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.500439721s)
	I0410 22:49:13.328534   58701 crio.go:469] duration metric: took 2.500564923s to extract the tarball
	I0410 22:49:13.328545   58701 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:49:13.367568   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:13.415759   58701 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:49:13.415780   58701 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:49:13.415788   58701 kubeadm.go:928] updating node { 192.168.72.170 8444 v1.29.3 crio true true} ...
	I0410 22:49:13.415899   58701 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-519831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:13.415982   58701 ssh_runner.go:195] Run: crio config
	I0410 22:49:13.473019   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:13.473046   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:13.473063   58701 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:13.473100   58701 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.170 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-519831 NodeName:default-k8s-diff-port-519831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:13.473261   58701 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-519831"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:13.473325   58701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:49:13.487302   58701 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:13.487368   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:13.498496   58701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0410 22:49:13.518312   58701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:49:13.537972   58701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0410 22:49:13.558714   58701 ssh_runner.go:195] Run: grep 192.168.72.170	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:13.562886   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:13.575957   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:13.706316   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:13.725898   58701 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831 for IP: 192.168.72.170
	I0410 22:49:13.725924   58701 certs.go:194] generating shared ca certs ...
	I0410 22:49:13.725944   58701 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:13.726119   58701 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:13.726173   58701 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:13.726185   58701 certs.go:256] generating profile certs ...
	I0410 22:49:13.726297   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/client.key
	I0410 22:49:13.726398   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key.ff579077
	I0410 22:49:13.726454   58701 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key
	I0410 22:49:13.726606   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:13.726644   58701 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:13.726656   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:13.726685   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:13.726725   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:13.726756   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:13.726811   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:13.727747   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:13.780060   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:13.818446   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:13.865986   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:13.897578   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0410 22:49:13.937123   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:49:13.970558   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:13.997678   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:49:14.025173   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:14.051190   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:14.079109   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:14.107547   58701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:14.128029   58701 ssh_runner.go:195] Run: openssl version
	I0410 22:49:14.134686   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:14.148733   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154057   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154114   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.160626   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:14.174406   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:14.187513   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193279   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193344   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.199518   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:14.213538   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:14.225618   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230610   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230666   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.236756   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:14.250041   58701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:14.255320   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:14.262821   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:14.268854   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:14.275152   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:14.281598   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:14.287895   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:49:14.294125   58701 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:14.294246   58701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:14.294301   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.332192   58701 cri.go:89] found id: ""
	I0410 22:49:14.332268   58701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:14.343174   58701 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:14.343198   58701 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:14.343205   58701 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:14.343261   58701 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:14.355648   58701 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:14.357310   58701 kubeconfig.go:125] found "default-k8s-diff-port-519831" server: "https://192.168.72.170:8444"
	I0410 22:49:14.360713   58701 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:14.371972   58701 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.170
	I0410 22:49:14.372011   58701 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:14.372025   58701 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:14.372083   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.410517   58701 cri.go:89] found id: ""
	I0410 22:49:14.410594   58701 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:14.428686   58701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:14.443256   58701 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:14.443281   58701 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:14.443353   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0410 22:49:14.455086   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:14.455156   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:14.466151   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0410 22:49:14.476799   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:14.476852   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:14.487588   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.498476   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:14.498534   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.509248   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0410 22:49:14.520223   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:14.520287   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:14.531388   58701 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:14.542775   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:14.673733   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.773338   58701 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099570437s)
	I0410 22:49:15.773385   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.985355   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.052996   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.126251   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:16.126362   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.626615   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.127289   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.166269   58701 api_server.go:72] duration metric: took 1.040013076s to wait for apiserver process to appear ...
	I0410 22:49:17.166315   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:17.166339   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:17.166964   58701 api_server.go:269] stopped: https://192.168.72.170:8444/healthz: Get "https://192.168.72.170:8444/healthz": dial tcp 192.168.72.170:8444: connect: connection refused
	I0410 22:49:15.480947   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:15.481358   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:15.481386   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:15.481309   59513 retry.go:31] will retry after 2.276682979s: waiting for machine to come up
	I0410 22:49:17.759404   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:17.759931   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:17.759975   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:17.759887   59513 retry.go:31] will retry after 2.254373826s: waiting for machine to come up
	I0410 22:49:15.585476   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.085404   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.585123   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.085713   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.584877   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.085601   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.585222   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.084891   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.585215   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:20.085668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.519156   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:20.520053   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:17.667248   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.709507   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:20.709538   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:20.709554   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.740392   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:20.740483   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.166658   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.174343   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.174378   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.667345   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.685078   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.685112   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:22.166644   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:22.171611   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:49:22.178452   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:22.178484   58701 api_server.go:131] duration metric: took 5.012161431s to wait for apiserver health ...
	I0410 22:49:22.178493   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:22.178499   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:22.180370   58701 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:22.181768   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:22.197462   58701 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:22.218348   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:22.236800   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:22.236830   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:22.236837   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:22.236843   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:22.236849   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:22.236861   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:22.236866   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:22.236871   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:22.236876   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:22.236884   58701 system_pods.go:74] duration metric: took 18.510987ms to wait for pod list to return data ...
	I0410 22:49:22.236893   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:22.242143   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:22.242167   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:22.242177   58701 node_conditions.go:105] duration metric: took 5.279415ms to run NodePressure ...
	I0410 22:49:22.242192   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:22.532741   58701 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537418   58701 kubeadm.go:733] kubelet initialised
	I0410 22:49:22.537444   58701 kubeadm.go:734] duration metric: took 4.675489ms waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537453   58701 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:22.543364   58701 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.549161   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549186   58701 pod_ready.go:81] duration metric: took 5.796619ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.549196   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549207   58701 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.554131   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554156   58701 pod_ready.go:81] duration metric: took 4.941026ms for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.554165   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554172   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.558783   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558812   58701 pod_ready.go:81] duration metric: took 4.633262ms for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.558822   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558828   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.622314   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622344   58701 pod_ready.go:81] duration metric: took 63.505681ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.622356   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622370   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.022239   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022266   58701 pod_ready.go:81] duration metric: took 399.888837ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.022275   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022286   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.422213   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422245   58701 pod_ready.go:81] duration metric: took 399.950443ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.422257   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422270   58701 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.823832   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823858   58701 pod_ready.go:81] duration metric: took 401.581123ms for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.823868   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823875   58701 pod_ready.go:38] duration metric: took 1.286413141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:23.823889   58701 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:49:23.840663   58701 ops.go:34] apiserver oom_adj: -16
	I0410 22:49:23.840691   58701 kubeadm.go:591] duration metric: took 9.497479077s to restartPrimaryControlPlane
	I0410 22:49:23.840702   58701 kubeadm.go:393] duration metric: took 9.546582608s to StartCluster
	I0410 22:49:23.840718   58701 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.840795   58701 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:49:23.843350   58701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.843613   58701 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:49:23.845385   58701 out.go:177] * Verifying Kubernetes components...
	I0410 22:49:23.843685   58701 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:49:23.846686   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:23.845421   58701 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846834   58701 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-519831"
	I0410 22:49:23.843826   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	W0410 22:49:23.846852   58701 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:49:23.846901   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.845429   58701 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846969   58701 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-519831"
	I0410 22:49:23.845433   58701 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.847069   58701 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.847088   58701 addons.go:243] addon metrics-server should already be in state true
	I0410 22:49:23.847122   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.847349   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847358   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847381   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847384   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847495   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847532   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.863090   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I0410 22:49:23.863240   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0410 22:49:23.863685   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.863793   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.864315   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864333   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864356   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864371   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864741   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864749   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864949   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.865210   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.865258   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.867599   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0410 22:49:23.868035   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.868627   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.868652   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.868739   58701 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.868757   58701 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:49:23.868785   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.869023   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.869094   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869136   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.869562   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869630   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.881589   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0410 22:49:23.881997   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.882429   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.882442   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.882719   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.882914   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.884708   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.886865   58701 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:49:23.886946   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0410 22:49:23.888493   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:49:23.888511   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:49:23.888532   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.888850   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.889129   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0410 22:49:23.889513   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.889536   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.889601   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.890020   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.890265   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.890285   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.890308   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.890667   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.891458   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.891496   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.892090   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.892232   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.894143   58701 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:20.015689   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:20.016192   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:20.016230   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:20.016163   59513 retry.go:31] will retry after 2.611766259s: waiting for machine to come up
	I0410 22:49:22.629270   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:22.629704   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:22.629731   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:22.629644   59513 retry.go:31] will retry after 3.270808972s: waiting for machine to come up
	I0410 22:49:23.892695   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.892720   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.895489   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.895599   58701 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:23.895609   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:49:23.895623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.896367   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.896558   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.896754   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.898964   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899320   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.899355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899535   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.899715   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.899855   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.899999   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.910046   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0410 22:49:23.910471   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.911056   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.911077   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.911445   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.911653   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.913330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.913603   58701 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:23.913619   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:49:23.913637   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.916303   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916759   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.916820   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916923   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.917137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.917377   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.917517   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:24.067636   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:24.087396   58701 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:24.204429   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:49:24.204457   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:49:24.213319   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:24.224083   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:24.234156   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:49:24.234182   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:49:24.273950   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.273980   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:49:24.295822   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.580460   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580835   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.580853   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.580864   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.581102   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.581126   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.589648   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.589714   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.589981   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.590040   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.590062   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339438   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.043578779s)
	I0410 22:49:25.339489   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339451   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115333809s)
	I0410 22:49:25.339560   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339593   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339872   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339897   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339911   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339924   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339944   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.339956   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339984   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340004   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.340015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.340149   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.340185   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340203   58701 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-519831"
	I0410 22:49:25.341481   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.341497   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.344575   58701 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0410 22:49:20.585629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.084898   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.585346   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.085672   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.585768   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.085613   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.585507   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.085104   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.585745   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:25.084858   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.017917   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.018591   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:27.019206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.341622   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.345974   58701 addons.go:505] duration metric: took 1.502302613s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0410 22:49:26.094458   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:25.904062   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904580   57270 main.go:141] libmachine: (no-preload-646133) Found IP for machine: 192.168.50.17
	I0410 22:49:25.904608   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has current primary IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904622   57270 main.go:141] libmachine: (no-preload-646133) Reserving static IP address...
	I0410 22:49:25.905076   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.905117   57270 main.go:141] libmachine: (no-preload-646133) DBG | skip adding static IP to network mk-no-preload-646133 - found existing host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"}
	I0410 22:49:25.905134   57270 main.go:141] libmachine: (no-preload-646133) Reserved static IP address: 192.168.50.17
	I0410 22:49:25.905151   57270 main.go:141] libmachine: (no-preload-646133) Waiting for SSH to be available...
	I0410 22:49:25.905170   57270 main.go:141] libmachine: (no-preload-646133) DBG | Getting to WaitForSSH function...
	I0410 22:49:25.907397   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907773   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.907796   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907937   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH client type: external
	I0410 22:49:25.907960   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa (-rw-------)
	I0410 22:49:25.907979   57270 main.go:141] libmachine: (no-preload-646133) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:25.907989   57270 main.go:141] libmachine: (no-preload-646133) DBG | About to run SSH command:
	I0410 22:49:25.907997   57270 main.go:141] libmachine: (no-preload-646133) DBG | exit 0
	I0410 22:49:26.032683   57270 main.go:141] libmachine: (no-preload-646133) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:26.033065   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetConfigRaw
	I0410 22:49:26.033761   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.036545   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.036951   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.036982   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.037187   57270 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/config.json ...
	I0410 22:49:26.037403   57270 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:26.037424   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:26.037655   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.039750   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040081   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.040102   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040285   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.040486   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040657   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040818   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.040972   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.041180   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.041197   57270 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:26.149298   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:26.149335   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149618   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:49:26.149647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149849   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.152432   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152799   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.152829   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.153233   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153406   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153571   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.153774   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.153992   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.154010   57270 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-646133 && echo "no-preload-646133" | sudo tee /etc/hostname
	I0410 22:49:26.283760   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-646133
	
	I0410 22:49:26.283794   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.286605   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.286925   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.286955   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.287097   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.287277   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287425   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287551   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.287725   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.287944   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.287969   57270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-646133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-646133/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-646133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:26.402869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:26.402905   57270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:26.402945   57270 buildroot.go:174] setting up certificates
	I0410 22:49:26.402956   57270 provision.go:84] configureAuth start
	I0410 22:49:26.402973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.403234   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.405718   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406079   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.406119   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.408549   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.408882   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.408917   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.409034   57270 provision.go:143] copyHostCerts
	I0410 22:49:26.409106   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:26.409124   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:26.409177   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:26.409310   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:26.409320   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:26.409341   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:26.409405   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:26.409412   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:26.409430   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:26.409476   57270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.no-preload-646133 san=[127.0.0.1 192.168.50.17 localhost minikube no-preload-646133]
	I0410 22:49:26.567556   57270 provision.go:177] copyRemoteCerts
	I0410 22:49:26.567611   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:26.567647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.570205   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570589   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.570614   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570805   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.571034   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.571172   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.571294   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:26.655943   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:26.681691   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:49:26.706573   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:26.733054   57270 provision.go:87] duration metric: took 330.073783ms to configureAuth
	I0410 22:49:26.733088   57270 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:26.733276   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:49:26.733347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.735910   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736264   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.736295   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736474   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.736648   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736798   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736925   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.737055   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.737225   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.737241   57270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:27.008174   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:27.008202   57270 machine.go:97] duration metric: took 970.785508ms to provisionDockerMachine
	I0410 22:49:27.008216   57270 start.go:293] postStartSetup for "no-preload-646133" (driver="kvm2")
	I0410 22:49:27.008236   57270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:27.008263   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.008554   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:27.008580   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.011150   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011561   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.011604   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011900   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.012090   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.012274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.012432   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.105247   57270 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:27.109842   57270 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:27.109868   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:27.109927   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:27.109993   57270 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:27.110080   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:27.121451   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:27.151797   57270 start.go:296] duration metric: took 143.569287ms for postStartSetup
	I0410 22:49:27.151836   57270 fix.go:56] duration metric: took 19.642403615s for fixHost
	I0410 22:49:27.151865   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.154454   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154869   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.154903   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154987   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.155193   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155512   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.155660   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:27.155862   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:27.155875   57270 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:49:27.265609   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789367.209761579
	
	I0410 22:49:27.265652   57270 fix.go:216] guest clock: 1712789367.209761579
	I0410 22:49:27.265662   57270 fix.go:229] Guest: 2024-04-10 22:49:27.209761579 +0000 UTC Remote: 2024-04-10 22:49:27.151840464 +0000 UTC m=+377.371052419 (delta=57.921115ms)
	I0410 22:49:27.265687   57270 fix.go:200] guest clock delta is within tolerance: 57.921115ms
	I0410 22:49:27.265697   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 19.756293566s
	I0410 22:49:27.265724   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.265960   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:27.268735   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269184   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.269216   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269380   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270014   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270233   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270331   57270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:27.270376   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.270645   57270 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:27.270669   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.273542   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273846   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273986   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274019   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274140   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274230   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274259   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274318   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274400   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274531   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274536   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274688   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274723   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.274806   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.359922   57270 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:27.400885   57270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:27.555260   57270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:27.561275   57270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:27.561333   57270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:27.578478   57270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:27.578502   57270 start.go:494] detecting cgroup driver to use...
	I0410 22:49:27.578567   57270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:27.598020   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:27.613068   57270 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:27.613140   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:27.629253   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:27.644130   57270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:27.791801   57270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:27.952366   57270 docker.go:233] disabling docker service ...
	I0410 22:49:27.952477   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:27.968629   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:27.982330   57270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:28.117396   57270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:28.240808   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:28.257299   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:28.280918   57270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:28.280991   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.296415   57270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:28.296480   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.308602   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.319535   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.329812   57270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:28.341466   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.354706   57270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.374405   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.385094   57270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:28.394412   57270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:28.394466   57270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:28.407654   57270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:28.418381   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:28.525783   57270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:28.678643   57270 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:28.678706   57270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:28.683681   57270 start.go:562] Will wait 60s for crictl version
	I0410 22:49:28.683737   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:28.687703   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:28.725311   57270 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:49:28.725414   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.755393   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.788963   57270 out.go:177] * Preparing Kubernetes v1.30.0-rc.1 on CRI-O 1.29.1 ...
	I0410 22:49:28.790274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:28.793091   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793418   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:28.793452   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793659   57270 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:28.798916   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:28.814575   57270 kubeadm.go:877] updating cluster {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:28.814689   57270 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 22:49:28.814717   57270 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:28.852604   57270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.1". assuming images are not preloaded.
	I0410 22:49:28.852627   57270 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.1 registry.k8s.io/kube-controller-manager:v1.30.0-rc.1 registry.k8s.io/kube-scheduler:v1.30.0-rc.1 registry.k8s.io/kube-proxy:v1.30.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:49:28.852698   57270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.852707   57270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.852733   57270 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.852756   57270 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0410 22:49:28.852803   57270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.852870   57270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.852890   57270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.852917   57270 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854348   57270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.854354   57270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.854378   57270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.854419   57270 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854421   57270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.854355   57270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.854353   57270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.854740   57270 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0410 22:49:29.066608   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0410 22:49:29.072486   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.073347   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.075270   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.082649   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.085737   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.093699   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.290780   57270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" does not exist at hash "ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b" in container runtime
	I0410 22:49:29.290810   57270 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0410 22:49:29.290839   57270 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.290837   57270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.290849   57270 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0410 22:49:29.290871   57270 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290902   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304346   57270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.1" does not exist at hash "69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061" in container runtime
	I0410 22:49:29.304409   57270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.304459   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304510   57270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" does not exist at hash "bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895" in container runtime
	I0410 22:49:29.304599   57270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.304635   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304563   57270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" does not exist at hash "577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090" in container runtime
	I0410 22:49:29.304689   57270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.304738   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.311219   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.311264   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.311311   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.324663   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.324770   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.324855   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.442426   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0410 22:49:29.442541   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.458416   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0410 22:49:29.458526   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:29.468890   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.468998   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.481365   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.481482   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.498862   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498899   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0410 22:49:29.498913   57270 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498927   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498951   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1 (exists)
	I0410 22:49:29.498957   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498964   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498982   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1 (exists)
	I0410 22:49:29.499012   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498926   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0410 22:49:29.507249   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1 (exists)
	I0410 22:49:29.507282   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1 (exists)
	I0410 22:49:29.751612   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:25.585095   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.085119   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.585846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.084920   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.585251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.084926   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.585643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.084937   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.585666   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:30.085088   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.518476   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:31.518837   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:28.592323   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.098027   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.591789   58701 node_ready.go:49] node "default-k8s-diff-port-519831" has status "Ready":"True"
	I0410 22:49:31.591822   58701 node_ready.go:38] duration metric: took 7.504383585s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:31.591835   58701 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:31.599103   58701 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607758   58701 pod_ready.go:92] pod "coredns-76f75df574-ghnvx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:31.607787   58701 pod_ready.go:81] duration metric: took 8.655521ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607801   58701 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:33.690936   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.191950196s)
	I0410 22:49:33.690965   57270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.939318786s)
	I0410 22:49:33.691014   57270 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0410 22:49:33.691045   57270 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:33.690973   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0410 22:49:33.691091   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.691101   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:33.691163   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.695868   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:30.585515   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.085273   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.585347   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.585361   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.085648   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.585256   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.084938   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.585005   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:35.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.018733   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:36.019904   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:33.615785   58701 pod_ready.go:102] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:35.115811   58701 pod_ready.go:92] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:35.115846   58701 pod_ready.go:81] duration metric: took 3.508038321s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:35.115856   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123593   58701 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.123624   58701 pod_ready.go:81] duration metric: took 2.007760022s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123638   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130390   58701 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.130421   58701 pod_ready.go:81] duration metric: took 6.771239ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130436   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136219   58701 pod_ready.go:92] pod "kube-proxy-5mbwx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.136253   58701 pod_ready.go:81] duration metric: took 5.809077ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136265   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142909   58701 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.142939   58701 pod_ready.go:81] duration metric: took 6.664922ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142954   58701 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:35.767190   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1: (2.075997626s)
	I0410 22:49:35.767227   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1 from cache
	I0410 22:49:35.767261   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767278   57270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071386498s)
	I0410 22:49:35.767326   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767327   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0410 22:49:35.767497   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:35.773679   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0410 22:49:37.666289   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1: (1.898906389s)
	I0410 22:49:37.666326   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1 from cache
	I0410 22:49:37.666358   57270 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:37.666422   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:39.652778   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.986322091s)
	I0410 22:49:39.652820   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0410 22:49:39.652855   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:39.652951   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:35.585228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.085699   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.585690   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.085760   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.584867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:37.584947   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:37.625964   57719 cri.go:89] found id: ""
	I0410 22:49:37.625989   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.625996   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:37.626001   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:37.626046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:37.669151   57719 cri.go:89] found id: ""
	I0410 22:49:37.669178   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.669188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:37.669194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:37.669242   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:37.711426   57719 cri.go:89] found id: ""
	I0410 22:49:37.711456   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.711466   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:37.711474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:37.711538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:37.754678   57719 cri.go:89] found id: ""
	I0410 22:49:37.754707   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.754719   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:37.754726   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:37.754809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:37.795259   57719 cri.go:89] found id: ""
	I0410 22:49:37.795291   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.795301   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:37.795307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:37.795375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:37.836961   57719 cri.go:89] found id: ""
	I0410 22:49:37.836994   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.837004   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:37.837011   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:37.837075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:37.876195   57719 cri.go:89] found id: ""
	I0410 22:49:37.876223   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.876233   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:37.876239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:37.876290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:37.911688   57719 cri.go:89] found id: ""
	I0410 22:49:37.911715   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.911725   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:37.911736   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:37.911751   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:37.954690   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:37.954734   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:38.006731   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:38.006771   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:38.024290   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:38.024314   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:38.148504   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:38.148529   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:38.148561   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:38.519483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:40.520822   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:39.150543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:41.151300   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:42.217749   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1: (2.564772479s)
	I0410 22:49:42.217778   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1 from cache
	I0410 22:49:42.217802   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:42.217843   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:44.577826   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1: (2.359955682s)
	I0410 22:49:44.577865   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1 from cache
	I0410 22:49:44.577892   57270 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:44.577940   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:40.726314   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:40.743098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:40.743168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:40.794673   57719 cri.go:89] found id: ""
	I0410 22:49:40.794697   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.794704   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:40.794710   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:40.794756   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:40.836274   57719 cri.go:89] found id: ""
	I0410 22:49:40.836308   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.836319   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:40.836327   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:40.836408   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:40.882249   57719 cri.go:89] found id: ""
	I0410 22:49:40.882276   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.882285   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:40.882292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:40.882357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:40.925829   57719 cri.go:89] found id: ""
	I0410 22:49:40.925867   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.925878   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:40.925885   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:40.925936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:40.978494   57719 cri.go:89] found id: ""
	I0410 22:49:40.978529   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.978540   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:40.978547   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:40.978611   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:41.020935   57719 cri.go:89] found id: ""
	I0410 22:49:41.020964   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.020975   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:41.020982   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:41.021040   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:41.060779   57719 cri.go:89] found id: ""
	I0410 22:49:41.060812   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.060824   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:41.060831   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:41.060885   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:41.119604   57719 cri.go:89] found id: ""
	I0410 22:49:41.119632   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.119643   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:41.119653   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:41.119667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:41.188739   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:41.188774   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:41.203682   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:41.203735   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:41.293423   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:41.293451   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:41.293468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:41.366606   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:41.366649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:43.914447   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:43.930350   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:43.930439   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:43.968867   57719 cri.go:89] found id: ""
	I0410 22:49:43.968921   57719 logs.go:276] 0 containers: []
	W0410 22:49:43.968932   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:43.968939   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:43.969012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:44.010143   57719 cri.go:89] found id: ""
	I0410 22:49:44.010169   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.010181   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:44.010188   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:44.010264   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:44.048610   57719 cri.go:89] found id: ""
	I0410 22:49:44.048637   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.048645   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:44.048651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:44.048697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:44.105939   57719 cri.go:89] found id: ""
	I0410 22:49:44.105973   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.106001   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:44.106009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:44.106086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:44.149699   57719 cri.go:89] found id: ""
	I0410 22:49:44.149726   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.149735   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:44.149743   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:44.149803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:44.193131   57719 cri.go:89] found id: ""
	I0410 22:49:44.193159   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.193167   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:44.193173   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:44.193255   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:44.233751   57719 cri.go:89] found id: ""
	I0410 22:49:44.233781   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.233789   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:44.233801   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:44.233868   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:44.284404   57719 cri.go:89] found id: ""
	I0410 22:49:44.284432   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.284441   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:44.284449   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:44.284461   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:44.330082   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:44.330118   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:44.383452   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:44.383487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:44.399604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:44.399632   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:44.476328   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:44.476368   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:44.476415   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:43.019922   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.519954   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:43.650596   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.651668   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.537183   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0410 22:49:45.537228   57270 cache_images.go:123] Successfully loaded all cached images
	I0410 22:49:45.537235   57270 cache_images.go:92] duration metric: took 16.68459637s to LoadCachedImages
	I0410 22:49:45.537249   57270 kubeadm.go:928] updating node { 192.168.50.17 8443 v1.30.0-rc.1 crio true true} ...
	I0410 22:49:45.537401   57270 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-646133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:45.537476   57270 ssh_runner.go:195] Run: crio config
	I0410 22:49:45.587002   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:45.587031   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:45.587047   57270 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:45.587069   57270 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.17 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-646133 NodeName:no-preload-646133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:45.587205   57270 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-646133"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:45.587272   57270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.1
	I0410 22:49:45.600694   57270 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:45.600758   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:45.613884   57270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0410 22:49:45.633871   57270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0410 22:49:45.654733   57270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0410 22:49:45.673976   57270 ssh_runner.go:195] Run: grep 192.168.50.17	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:45.678260   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:45.693499   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:45.819034   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:45.838775   57270 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133 for IP: 192.168.50.17
	I0410 22:49:45.838799   57270 certs.go:194] generating shared ca certs ...
	I0410 22:49:45.838819   57270 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:45.839010   57270 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:45.839064   57270 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:45.839078   57270 certs.go:256] generating profile certs ...
	I0410 22:49:45.839175   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.key
	I0410 22:49:45.839256   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key.d257fb06
	I0410 22:49:45.839310   57270 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key
	I0410 22:49:45.839480   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:45.839521   57270 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:45.839531   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:45.839551   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:45.839608   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:45.839633   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:45.839674   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:45.840315   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:45.897688   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:45.932242   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:45.979537   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:46.020562   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0410 22:49:46.057254   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:49:46.084070   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:46.112807   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0410 22:49:46.141650   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:46.170167   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:46.196917   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:46.222645   57270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:46.242626   57270 ssh_runner.go:195] Run: openssl version
	I0410 22:49:46.249048   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:46.265110   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270018   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270083   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.276298   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:46.288165   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:46.299040   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303584   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303627   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.309278   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:46.319990   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:46.331654   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336700   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336750   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.342767   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:46.355005   57270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:46.359870   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:46.366270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:46.372625   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:46.379270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:46.386312   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:46.392796   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:49:46.399209   57270 kubeadm.go:391] StartCluster: {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:46.399318   57270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:46.399405   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.439061   57270 cri.go:89] found id: ""
	I0410 22:49:46.439149   57270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:46.450243   57270 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:46.450265   57270 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:46.450271   57270 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:46.450323   57270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:46.460553   57270 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:46.461608   57270 kubeconfig.go:125] found "no-preload-646133" server: "https://192.168.50.17:8443"
	I0410 22:49:46.464469   57270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:46.474775   57270 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.17
	I0410 22:49:46.474808   57270 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:46.474820   57270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:46.474860   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.514933   57270 cri.go:89] found id: ""
	I0410 22:49:46.515010   57270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:46.533830   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:46.547026   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:46.547042   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:46.547081   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:49:46.557093   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:46.557157   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:46.567102   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:49:46.576939   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:46.576998   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:46.586921   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.596189   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:46.596260   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.607803   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:49:46.618166   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:46.618240   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:46.628406   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:46.638748   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:46.767824   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.028868   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.261006059s)
	I0410 22:49:48.028907   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.253185   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.323164   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.404069   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:48.404153   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:48.904557   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.404477   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.437891   57270 api_server.go:72] duration metric: took 1.033818826s to wait for apiserver process to appear ...
	I0410 22:49:49.437927   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:49.437953   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:49.438623   57270 api_server.go:269] stopped: https://192.168.50.17:8443/healthz: Get "https://192.168.50.17:8443/healthz": dial tcp 192.168.50.17:8443: connect: connection refused
	I0410 22:49:47.054122   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:47.069583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:47.069654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:47.113953   57719 cri.go:89] found id: ""
	I0410 22:49:47.113981   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.113989   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:47.113995   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:47.114054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:47.156770   57719 cri.go:89] found id: ""
	I0410 22:49:47.156798   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.156808   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:47.156814   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:47.156891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:47.195227   57719 cri.go:89] found id: ""
	I0410 22:49:47.195252   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.195261   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:47.195266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:47.195328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:47.238109   57719 cri.go:89] found id: ""
	I0410 22:49:47.238138   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.238150   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:47.238157   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:47.238212   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.285062   57719 cri.go:89] found id: ""
	I0410 22:49:47.285093   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.285101   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:47.285108   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:47.285185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:47.324635   57719 cri.go:89] found id: ""
	I0410 22:49:47.324663   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.324670   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:47.324676   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:47.324744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:47.365404   57719 cri.go:89] found id: ""
	I0410 22:49:47.365437   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.365445   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:47.365468   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:47.365535   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:47.412296   57719 cri.go:89] found id: ""
	I0410 22:49:47.412335   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.412346   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:47.412367   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:47.412384   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:47.497998   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:47.498019   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:47.498033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:47.590502   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:47.590536   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:47.647665   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:47.647692   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:47.697704   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:47.697741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.213410   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:50.229408   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:50.229488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:50.268514   57719 cri.go:89] found id: ""
	I0410 22:49:50.268545   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.268556   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:50.268563   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:50.268620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:50.308733   57719 cri.go:89] found id: ""
	I0410 22:49:50.308762   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.308790   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:50.308796   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:50.308857   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:50.353929   57719 cri.go:89] found id: ""
	I0410 22:49:50.353966   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.353977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:50.353985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:50.354043   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:50.397979   57719 cri.go:89] found id: ""
	I0410 22:49:50.398009   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.398019   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:50.398026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:50.398086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.521284   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.018571   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.020874   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:48.151768   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.151820   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:49.939075   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.355813   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:52.355855   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:52.355868   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.502702   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.502733   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.502796   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.509360   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.509401   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.939056   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.946114   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.946154   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.438741   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.444154   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:53.444187   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.938848   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.947578   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:49:53.956247   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:49:53.956281   57270 api_server.go:131] duration metric: took 4.518344859s to wait for apiserver health ...
	I0410 22:49:53.956292   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:53.956301   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:53.958053   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:53.959420   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:53.973242   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:54.004623   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:54.024138   57270 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:54.024185   57270 system_pods.go:61] "coredns-7db6d8ff4d-lbcp6" [1ff36529-d718-41e7-9b61-54ba32efab0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:54.024195   57270 system_pods.go:61] "etcd-no-preload-646133" [a704a953-1418-4425-8ac1-272c632050c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:54.024214   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [90d4ff18-767c-4dbf-b4ad-ff02cb3d542f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:54.024231   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [82c0778e-690f-41a6-a57f-017ab79fd029] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:54.024243   57270 system_pods.go:61] "kube-proxy-v5fbl" [002efd18-4375-455b-9b4a-15bb739120e0] Running
	I0410 22:49:54.024252   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fa9898bc-36a6-4cc4-91e6-bba4ccd22d9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:54.024264   57270 system_pods.go:61] "metrics-server-569cc877fc-pw276" [22de5c2f-13ab-4f69-8eb6-ec4a3c3d1e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:54.024277   57270 system_pods.go:61] "storage-provisioner" [1028921e-3924-4614-bcb6-f949c18e9e4e] Running
	I0410 22:49:54.024287   57270 system_pods.go:74] duration metric: took 19.638409ms to wait for pod list to return data ...
	I0410 22:49:54.024301   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:54.031666   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:54.031694   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:54.031705   57270 node_conditions.go:105] duration metric: took 7.394201ms to run NodePressure ...
	I0410 22:49:54.031720   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:54.339352   57270 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345115   57270 kubeadm.go:733] kubelet initialised
	I0410 22:49:54.345146   57270 kubeadm.go:734] duration metric: took 5.76519ms waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345156   57270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:54.352254   57270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:50.436191   57719 cri.go:89] found id: ""
	I0410 22:49:50.436222   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.436234   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:50.436241   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:50.436316   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:50.476462   57719 cri.go:89] found id: ""
	I0410 22:49:50.476486   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.476494   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:50.476499   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:50.476557   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:50.520025   57719 cri.go:89] found id: ""
	I0410 22:49:50.520054   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.520063   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:50.520071   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:50.520127   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:50.564535   57719 cri.go:89] found id: ""
	I0410 22:49:50.564570   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.564581   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:50.564593   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:50.564624   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:50.620587   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:50.620629   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.634802   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:50.634832   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:50.707625   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:50.707655   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:50.707671   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:50.791935   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:50.791970   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:53.339109   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:53.361555   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:53.361632   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:53.428170   57719 cri.go:89] found id: ""
	I0410 22:49:53.428202   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.428212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:53.428219   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:53.428281   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:53.501929   57719 cri.go:89] found id: ""
	I0410 22:49:53.501957   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.501968   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:53.501977   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:53.502055   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:53.548844   57719 cri.go:89] found id: ""
	I0410 22:49:53.548871   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.548890   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:53.548897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:53.548949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:53.595056   57719 cri.go:89] found id: ""
	I0410 22:49:53.595081   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.595090   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:53.595098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:53.595153   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:53.638885   57719 cri.go:89] found id: ""
	I0410 22:49:53.638920   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.638938   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:53.638946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:53.639046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:53.685526   57719 cri.go:89] found id: ""
	I0410 22:49:53.685565   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.685573   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:53.685579   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:53.685650   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:53.725084   57719 cri.go:89] found id: ""
	I0410 22:49:53.725112   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.725119   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:53.725125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:53.725172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:53.767031   57719 cri.go:89] found id: ""
	I0410 22:49:53.767062   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.767072   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:53.767083   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:53.767103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:53.826570   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:53.826618   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:53.843784   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:53.843822   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:53.926277   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:53.926299   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:53.926317   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:54.024735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:54.024782   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:54.519305   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.520139   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.651382   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:55.149798   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:57.150803   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.359479   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:58.859341   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.586265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:56.602113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:56.602200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:56.647041   57719 cri.go:89] found id: ""
	I0410 22:49:56.647074   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.647086   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:56.647094   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:56.647168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:56.688053   57719 cri.go:89] found id: ""
	I0410 22:49:56.688086   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.688096   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:56.688104   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:56.688190   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:56.729176   57719 cri.go:89] found id: ""
	I0410 22:49:56.729210   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.729221   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:56.729229   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:56.729293   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:56.768877   57719 cri.go:89] found id: ""
	I0410 22:49:56.768905   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.768913   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:56.768919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:56.768966   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:56.807228   57719 cri.go:89] found id: ""
	I0410 22:49:56.807274   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.807286   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:56.807294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:56.807361   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:56.848183   57719 cri.go:89] found id: ""
	I0410 22:49:56.848216   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.848224   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:56.848230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:56.848284   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:56.887894   57719 cri.go:89] found id: ""
	I0410 22:49:56.887923   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.887931   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:56.887937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:56.887993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:56.926908   57719 cri.go:89] found id: ""
	I0410 22:49:56.926935   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.926944   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:56.926952   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:56.926968   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:57.012614   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:57.012640   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:57.012657   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:57.098735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:57.098784   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:57.140798   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:57.140831   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:57.204239   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:57.204283   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:59.720328   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:59.735964   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:59.736042   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:59.774351   57719 cri.go:89] found id: ""
	I0410 22:49:59.774383   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.774393   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:59.774407   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:59.774468   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:59.817222   57719 cri.go:89] found id: ""
	I0410 22:49:59.817248   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.817255   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:59.817260   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:59.817310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:59.854551   57719 cri.go:89] found id: ""
	I0410 22:49:59.854582   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.854594   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:59.854602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:59.854656   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:59.894334   57719 cri.go:89] found id: ""
	I0410 22:49:59.894367   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.894375   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:59.894381   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:59.894442   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:59.932446   57719 cri.go:89] found id: ""
	I0410 22:49:59.932472   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.932482   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:59.932489   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:59.932552   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:59.969168   57719 cri.go:89] found id: ""
	I0410 22:49:59.969193   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.969201   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:59.969209   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:59.969273   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:00.006918   57719 cri.go:89] found id: ""
	I0410 22:50:00.006960   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.006972   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:00.006979   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:00.007036   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:00.050380   57719 cri.go:89] found id: ""
	I0410 22:50:00.050411   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.050424   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:00.050433   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:00.050454   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:00.066340   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:00.066366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:00.146454   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:00.146479   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:00.146494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:00.231174   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:00.231225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:00.278732   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:00.278759   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:59.020938   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.518584   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:59.151137   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.650307   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.359992   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:01.360021   57270 pod_ready.go:81] duration metric: took 7.007734788s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:01.360035   57270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867322   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:02.867349   57270 pod_ready.go:81] duration metric: took 1.507305949s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867362   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.833035   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:02.847316   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:02.847380   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:02.888793   57719 cri.go:89] found id: ""
	I0410 22:50:02.888821   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.888832   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:02.888840   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:02.888897   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:02.926495   57719 cri.go:89] found id: ""
	I0410 22:50:02.926525   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.926535   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:02.926542   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:02.926603   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:02.966185   57719 cri.go:89] found id: ""
	I0410 22:50:02.966217   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.966227   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:02.966233   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:02.966295   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:03.007383   57719 cri.go:89] found id: ""
	I0410 22:50:03.007408   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.007414   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:03.007420   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:03.007490   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:03.044245   57719 cri.go:89] found id: ""
	I0410 22:50:03.044273   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.044281   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:03.044292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:03.044367   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:03.078820   57719 cri.go:89] found id: ""
	I0410 22:50:03.078849   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.078859   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:03.078866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:03.078927   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:03.117205   57719 cri.go:89] found id: ""
	I0410 22:50:03.117233   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.117244   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:03.117251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:03.117313   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:03.155698   57719 cri.go:89] found id: ""
	I0410 22:50:03.155725   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.155735   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:03.155743   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:03.155758   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:03.231685   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:03.231712   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:03.231724   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:03.315122   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:03.315167   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:03.361151   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:03.361186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:03.412134   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:03.412168   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:04.017523   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.024382   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.150291   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.151488   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.873656   57270 pod_ready.go:102] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:05.874079   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:05.874106   57270 pod_ready.go:81] duration metric: took 3.006735064s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.874116   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:07.880447   57270 pod_ready.go:102] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.881209   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.881241   57270 pod_ready.go:81] duration metric: took 3.007117254s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.881271   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887939   57270 pod_ready.go:92] pod "kube-proxy-v5fbl" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.887963   57270 pod_ready.go:81] duration metric: took 6.68304ms for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887975   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894389   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.894415   57270 pod_ready.go:81] duration metric: took 6.43215ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894428   57270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.928116   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:05.942237   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:05.942337   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:05.983813   57719 cri.go:89] found id: ""
	I0410 22:50:05.983842   57719 logs.go:276] 0 containers: []
	W0410 22:50:05.983853   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:05.983861   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:05.983945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:06.024590   57719 cri.go:89] found id: ""
	I0410 22:50:06.024618   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.024626   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:06.024637   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:06.024698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:06.063040   57719 cri.go:89] found id: ""
	I0410 22:50:06.063075   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.063087   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:06.063094   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:06.063160   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:06.102224   57719 cri.go:89] found id: ""
	I0410 22:50:06.102250   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.102259   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:06.102273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:06.102342   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:06.144202   57719 cri.go:89] found id: ""
	I0410 22:50:06.144229   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.144236   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:06.144242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:06.144288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:06.189215   57719 cri.go:89] found id: ""
	I0410 22:50:06.189243   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.189250   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:06.189256   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:06.189308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:06.225218   57719 cri.go:89] found id: ""
	I0410 22:50:06.225247   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.225258   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:06.225266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:06.225330   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:06.265229   57719 cri.go:89] found id: ""
	I0410 22:50:06.265262   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.265273   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:06.265283   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:06.265306   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:06.279794   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:06.279825   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:06.348038   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:06.348063   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:06.348079   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:06.431293   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:06.431339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:06.476033   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:06.476060   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.032099   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:09.046628   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:09.046765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:09.086900   57719 cri.go:89] found id: ""
	I0410 22:50:09.086928   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.086936   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:09.086942   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:09.086998   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:09.124989   57719 cri.go:89] found id: ""
	I0410 22:50:09.125018   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.125028   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:09.125035   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:09.125096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:09.163720   57719 cri.go:89] found id: ""
	I0410 22:50:09.163749   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.163761   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:09.163769   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:09.163822   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:09.203846   57719 cri.go:89] found id: ""
	I0410 22:50:09.203875   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.203883   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:09.203888   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:09.203945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:09.242974   57719 cri.go:89] found id: ""
	I0410 22:50:09.243002   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.243016   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:09.243024   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:09.243092   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:09.278664   57719 cri.go:89] found id: ""
	I0410 22:50:09.278687   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.278694   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:09.278700   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:09.278762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:09.313335   57719 cri.go:89] found id: ""
	I0410 22:50:09.313359   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.313367   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:09.313372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:09.313419   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:09.351160   57719 cri.go:89] found id: ""
	I0410 22:50:09.351195   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.351206   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:09.351225   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:09.351239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:09.425989   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:09.426015   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:09.426033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:09.505189   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:09.505223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:09.549619   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:09.549651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.604322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:09.604360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:08.520115   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:11.018253   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.649190   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.650453   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.903726   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.401154   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:12.119780   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:12.135377   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:12.135458   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:12.178105   57719 cri.go:89] found id: ""
	I0410 22:50:12.178129   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.178138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:12.178144   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:12.178207   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:12.217369   57719 cri.go:89] found id: ""
	I0410 22:50:12.217397   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.217409   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:12.217424   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:12.217488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:12.254185   57719 cri.go:89] found id: ""
	I0410 22:50:12.254213   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.254222   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:12.254230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:12.254291   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:12.295007   57719 cri.go:89] found id: ""
	I0410 22:50:12.295038   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.295048   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:12.295057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:12.295125   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:12.334620   57719 cri.go:89] found id: ""
	I0410 22:50:12.334644   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.334651   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:12.334657   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:12.334707   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:12.371217   57719 cri.go:89] found id: ""
	I0410 22:50:12.371241   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.371249   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:12.371255   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:12.371302   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:12.409571   57719 cri.go:89] found id: ""
	I0410 22:50:12.409599   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.409608   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:12.409617   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:12.409675   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:12.453133   57719 cri.go:89] found id: ""
	I0410 22:50:12.453159   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.453169   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:12.453180   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:12.453194   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:12.505322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:12.505360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:12.520284   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:12.520315   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:12.608057   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:12.608082   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:12.608097   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:12.693240   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:12.693274   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:15.244628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:15.261915   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:15.262020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:15.302874   57719 cri.go:89] found id: ""
	I0410 22:50:15.302903   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.302910   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:15.302916   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:15.302973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:15.347492   57719 cri.go:89] found id: ""
	I0410 22:50:15.347518   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.347527   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:15.347534   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:15.347598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:15.394156   57719 cri.go:89] found id: ""
	I0410 22:50:15.394188   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.394198   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:15.394205   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:15.394265   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:13.518316   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.520507   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.150145   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.651083   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.401582   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:17.901179   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.437656   57719 cri.go:89] found id: ""
	I0410 22:50:15.437682   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.437690   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:15.437695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:15.437748   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:15.475658   57719 cri.go:89] found id: ""
	I0410 22:50:15.475686   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.475697   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:15.475704   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:15.475765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:15.517908   57719 cri.go:89] found id: ""
	I0410 22:50:15.517930   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.517937   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:15.517942   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:15.517991   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:15.560083   57719 cri.go:89] found id: ""
	I0410 22:50:15.560108   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.560117   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:15.560123   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:15.560178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:15.603967   57719 cri.go:89] found id: ""
	I0410 22:50:15.603994   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.604002   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:15.604013   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:15.604028   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:15.659994   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:15.660029   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:15.675627   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:15.675658   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:15.761297   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:15.761320   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:15.761339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:15.839225   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:15.839265   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.386062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:18.399609   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:18.399677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:18.443002   57719 cri.go:89] found id: ""
	I0410 22:50:18.443030   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.443040   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:18.443048   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:18.443106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:18.485089   57719 cri.go:89] found id: ""
	I0410 22:50:18.485121   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.485132   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:18.485140   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:18.485200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:18.524310   57719 cri.go:89] found id: ""
	I0410 22:50:18.524338   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.524347   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:18.524354   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:18.524412   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:18.563535   57719 cri.go:89] found id: ""
	I0410 22:50:18.563573   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.563582   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:18.563587   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:18.563634   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:18.600451   57719 cri.go:89] found id: ""
	I0410 22:50:18.600478   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.600487   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:18.600495   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:18.600562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:18.640445   57719 cri.go:89] found id: ""
	I0410 22:50:18.640472   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.640480   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:18.640485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:18.640550   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:18.677691   57719 cri.go:89] found id: ""
	I0410 22:50:18.677725   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.677746   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:18.677754   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:18.677817   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:18.716753   57719 cri.go:89] found id: ""
	I0410 22:50:18.716850   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.716876   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:18.716897   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:18.716918   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:18.804099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:18.804130   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:18.804144   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:18.883569   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:18.883611   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.930014   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:18.930045   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:18.980029   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:18.980065   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:18.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.020820   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:18.151029   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.650000   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:19.904069   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.401462   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:24.401892   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:21.495499   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:21.511001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:21.511075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:21.551469   57719 cri.go:89] found id: ""
	I0410 22:50:21.551511   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.551522   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:21.551540   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:21.551605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:21.590539   57719 cri.go:89] found id: ""
	I0410 22:50:21.590570   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.590580   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:21.590587   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:21.590654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:21.629005   57719 cri.go:89] found id: ""
	I0410 22:50:21.629030   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.629042   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:21.629048   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:21.629108   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:21.669745   57719 cri.go:89] found id: ""
	I0410 22:50:21.669767   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.669774   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:21.669780   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:21.669834   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:21.707806   57719 cri.go:89] found id: ""
	I0410 22:50:21.707831   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.707839   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:21.707844   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:21.707892   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:21.746698   57719 cri.go:89] found id: ""
	I0410 22:50:21.746727   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.746736   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:21.746742   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:21.746802   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:21.783048   57719 cri.go:89] found id: ""
	I0410 22:50:21.783070   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.783079   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:21.783084   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:21.783131   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:21.822457   57719 cri.go:89] found id: ""
	I0410 22:50:21.822484   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.822492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:21.822500   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:21.822513   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:21.894706   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:21.894747   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:21.909861   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:21.909903   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:21.999344   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:21.999370   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:21.999386   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:22.080004   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:22.080042   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:24.620924   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:24.634937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:24.634999   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:24.686619   57719 cri.go:89] found id: ""
	I0410 22:50:24.686644   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.686655   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:24.686662   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:24.686744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:24.723632   57719 cri.go:89] found id: ""
	I0410 22:50:24.723658   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.723667   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:24.723675   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:24.723738   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:24.760708   57719 cri.go:89] found id: ""
	I0410 22:50:24.760739   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.760750   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:24.760757   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:24.760804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:24.795680   57719 cri.go:89] found id: ""
	I0410 22:50:24.795712   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.795722   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:24.795729   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:24.795793   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:24.833033   57719 cri.go:89] found id: ""
	I0410 22:50:24.833063   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.833074   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:24.833082   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:24.833130   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:24.872840   57719 cri.go:89] found id: ""
	I0410 22:50:24.872864   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.872871   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:24.872877   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:24.872936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:24.915640   57719 cri.go:89] found id: ""
	I0410 22:50:24.915678   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.915688   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:24.915696   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:24.915755   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:24.957164   57719 cri.go:89] found id: ""
	I0410 22:50:24.957207   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.957219   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:24.957230   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:24.957244   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:25.006551   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:25.006601   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:25.021623   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:25.021649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:25.094699   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:25.094722   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:25.094741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:25.181280   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:25.181316   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:22.518442   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.018206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.650481   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.151162   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:26.904127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.400642   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.723475   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:27.737294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:27.737381   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:27.776098   57719 cri.go:89] found id: ""
	I0410 22:50:27.776126   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.776138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:27.776146   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:27.776203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:27.814324   57719 cri.go:89] found id: ""
	I0410 22:50:27.814352   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.814364   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:27.814371   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:27.814447   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:27.849573   57719 cri.go:89] found id: ""
	I0410 22:50:27.849603   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.849614   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:27.849621   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:27.849682   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:27.888904   57719 cri.go:89] found id: ""
	I0410 22:50:27.888932   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.888940   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:27.888946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:27.888993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:27.931772   57719 cri.go:89] found id: ""
	I0410 22:50:27.931800   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.931812   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:27.931821   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:27.931881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:27.975633   57719 cri.go:89] found id: ""
	I0410 22:50:27.975666   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.975676   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:27.975684   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:27.975736   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:28.012251   57719 cri.go:89] found id: ""
	I0410 22:50:28.012280   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.012290   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:28.012298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:28.012364   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:28.048848   57719 cri.go:89] found id: ""
	I0410 22:50:28.048886   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.048898   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:28.048908   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:28.048923   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:28.102215   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:28.102257   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:28.118052   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:28.118081   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:28.190738   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:28.190762   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:28.190777   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:28.269294   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:28.269330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:27.519211   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.521111   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.017915   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.651922   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.150852   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:31.401210   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:33.902054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.833927   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:30.848196   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:30.848266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:30.886077   57719 cri.go:89] found id: ""
	I0410 22:50:30.886117   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.886127   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:30.886133   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:30.886179   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:30.924638   57719 cri.go:89] found id: ""
	I0410 22:50:30.924668   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.924678   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:30.924686   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:30.924762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:30.961106   57719 cri.go:89] found id: ""
	I0410 22:50:30.961136   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.961147   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:30.961154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:30.961213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:31.001374   57719 cri.go:89] found id: ""
	I0410 22:50:31.001412   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.001427   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:31.001434   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:31.001498   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:31.038928   57719 cri.go:89] found id: ""
	I0410 22:50:31.038961   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.038971   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:31.038980   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:31.039057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:31.077033   57719 cri.go:89] found id: ""
	I0410 22:50:31.077067   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.077076   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:31.077083   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:31.077139   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:31.115227   57719 cri.go:89] found id: ""
	I0410 22:50:31.115257   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.115266   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:31.115273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:31.115335   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:31.157339   57719 cri.go:89] found id: ""
	I0410 22:50:31.157372   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.157382   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:31.157393   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:31.157409   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:31.198742   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:31.198770   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:31.255388   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:31.255422   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:31.272018   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:31.272048   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:31.344503   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:31.344524   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:31.344541   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:33.925749   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:33.939402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:33.939475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:33.976070   57719 cri.go:89] found id: ""
	I0410 22:50:33.976093   57719 logs.go:276] 0 containers: []
	W0410 22:50:33.976100   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:33.976106   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:33.976172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:34.013723   57719 cri.go:89] found id: ""
	I0410 22:50:34.013748   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.013758   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:34.013765   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:34.013821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:34.062678   57719 cri.go:89] found id: ""
	I0410 22:50:34.062704   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.062712   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:34.062718   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:34.062774   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:34.123007   57719 cri.go:89] found id: ""
	I0410 22:50:34.123038   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.123046   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:34.123052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:34.123096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:34.188811   57719 cri.go:89] found id: ""
	I0410 22:50:34.188841   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.188852   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:34.188859   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:34.188949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:34.223585   57719 cri.go:89] found id: ""
	I0410 22:50:34.223609   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.223618   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:34.223625   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:34.223680   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:34.260004   57719 cri.go:89] found id: ""
	I0410 22:50:34.260028   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.260036   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:34.260041   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:34.260096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:34.303064   57719 cri.go:89] found id: ""
	I0410 22:50:34.303093   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.303104   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:34.303115   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:34.303134   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:34.359105   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:34.359142   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:34.375420   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:34.375450   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:34.449619   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:34.449645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:34.449660   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:34.534214   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:34.534248   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:34.518609   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.016973   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.649917   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:34.661652   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.150648   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:36.401988   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:38.901505   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.076525   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:37.090789   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:37.090849   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:37.130848   57719 cri.go:89] found id: ""
	I0410 22:50:37.130881   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.130893   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:37.130900   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:37.130967   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:37.170158   57719 cri.go:89] found id: ""
	I0410 22:50:37.170181   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.170188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:37.170194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:37.170269   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:37.210238   57719 cri.go:89] found id: ""
	I0410 22:50:37.210264   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.210274   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:37.210282   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:37.210328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:37.256763   57719 cri.go:89] found id: ""
	I0410 22:50:37.256789   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.256800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:37.256807   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:37.256875   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:37.295323   57719 cri.go:89] found id: ""
	I0410 22:50:37.295355   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.295364   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:37.295372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:37.295443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:37.334066   57719 cri.go:89] found id: ""
	I0410 22:50:37.334094   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.334105   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:37.334113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:37.334170   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:37.374428   57719 cri.go:89] found id: ""
	I0410 22:50:37.374458   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.374477   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:37.374485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:37.374544   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:37.412114   57719 cri.go:89] found id: ""
	I0410 22:50:37.412142   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.412152   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:37.412161   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:37.412174   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:37.453693   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:37.453717   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:37.505484   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:37.505524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:37.523645   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:37.523672   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:37.595107   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:37.595134   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:37.595150   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.180649   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:40.195168   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:40.195243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:40.240130   57719 cri.go:89] found id: ""
	I0410 22:50:40.240160   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.240169   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:40.240175   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:40.240241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:40.276366   57719 cri.go:89] found id: ""
	I0410 22:50:40.276390   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.276406   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:40.276412   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:40.276466   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:40.314991   57719 cri.go:89] found id: ""
	I0410 22:50:40.315016   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.315023   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:40.315029   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:40.315075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:40.354301   57719 cri.go:89] found id: ""
	I0410 22:50:40.354331   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.354342   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:40.354349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:40.354414   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:40.393093   57719 cri.go:89] found id: ""
	I0410 22:50:40.393125   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.393135   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:40.393143   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:40.393204   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:39.021170   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:41.518285   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:39.650047   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.150206   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.902024   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.904180   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.429641   57719 cri.go:89] found id: ""
	I0410 22:50:40.429665   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.429674   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:40.429680   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:40.429727   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:40.468184   57719 cri.go:89] found id: ""
	I0410 22:50:40.468213   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.468224   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:40.468232   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:40.468304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:40.505586   57719 cri.go:89] found id: ""
	I0410 22:50:40.505616   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.505627   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:40.505637   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:40.505652   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:40.562078   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:40.562119   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:40.578135   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:40.578213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:40.659018   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:40.659047   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:40.659061   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.746434   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:40.746478   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.287852   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:43.301797   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:43.301869   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:43.339778   57719 cri.go:89] found id: ""
	I0410 22:50:43.339813   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.339822   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:43.339829   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:43.339893   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:43.378716   57719 cri.go:89] found id: ""
	I0410 22:50:43.378748   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.378759   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:43.378767   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:43.378836   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:43.417128   57719 cri.go:89] found id: ""
	I0410 22:50:43.417152   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.417163   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:43.417171   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:43.417234   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:43.459577   57719 cri.go:89] found id: ""
	I0410 22:50:43.459608   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.459617   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:43.459623   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:43.459678   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:43.497519   57719 cri.go:89] found id: ""
	I0410 22:50:43.497551   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.497561   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:43.497566   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:43.497620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:43.534400   57719 cri.go:89] found id: ""
	I0410 22:50:43.534433   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.534444   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:43.534451   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:43.534540   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:43.574213   57719 cri.go:89] found id: ""
	I0410 22:50:43.574242   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.574253   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:43.574283   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:43.574344   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:43.611078   57719 cri.go:89] found id: ""
	I0410 22:50:43.611106   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.611113   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:43.611121   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:43.611137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:43.698166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:43.698202   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.749368   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:43.749395   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:43.801584   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:43.801621   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:43.817012   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:43.817050   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:43.892325   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:43.518660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.017804   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:44.650389   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.650560   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:45.401723   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:47.901852   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.393325   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:46.407985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:46.408045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:46.442704   57719 cri.go:89] found id: ""
	I0410 22:50:46.442735   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.442745   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:46.442753   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:46.442821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:46.485582   57719 cri.go:89] found id: ""
	I0410 22:50:46.485611   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.485618   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:46.485625   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:46.485683   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:46.524199   57719 cri.go:89] found id: ""
	I0410 22:50:46.524227   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.524234   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:46.524240   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:46.524288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:46.560655   57719 cri.go:89] found id: ""
	I0410 22:50:46.560685   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.560694   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:46.560701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:46.560839   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:46.596617   57719 cri.go:89] found id: ""
	I0410 22:50:46.596646   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.596658   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:46.596666   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:46.596739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:46.634316   57719 cri.go:89] found id: ""
	I0410 22:50:46.634339   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.634347   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:46.634352   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:46.634399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:46.671466   57719 cri.go:89] found id: ""
	I0410 22:50:46.671493   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.671502   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:46.671509   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:46.671582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:46.709228   57719 cri.go:89] found id: ""
	I0410 22:50:46.709254   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.709265   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:46.709275   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:46.709291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:46.761329   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:46.761366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:46.778265   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:46.778288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:46.851092   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:46.851113   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:46.851125   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:46.929181   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:46.929223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:49.471285   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:49.485474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:49.485551   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:49.523799   57719 cri.go:89] found id: ""
	I0410 22:50:49.523826   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.523838   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:49.523846   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:49.523899   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:49.562102   57719 cri.go:89] found id: ""
	I0410 22:50:49.562129   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.562137   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:49.562143   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:49.562196   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:49.600182   57719 cri.go:89] found id: ""
	I0410 22:50:49.600204   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.600211   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:49.600216   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:49.600262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:49.640002   57719 cri.go:89] found id: ""
	I0410 22:50:49.640028   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.640039   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:49.640047   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:49.640111   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:49.678815   57719 cri.go:89] found id: ""
	I0410 22:50:49.678847   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.678858   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:49.678866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:49.678929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:49.716933   57719 cri.go:89] found id: ""
	I0410 22:50:49.716959   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.716969   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:49.716976   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:49.717039   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:49.756018   57719 cri.go:89] found id: ""
	I0410 22:50:49.756050   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.756060   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:49.756068   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:49.756132   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:49.802066   57719 cri.go:89] found id: ""
	I0410 22:50:49.802094   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.802103   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:49.802110   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:49.802123   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:49.856363   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:49.856417   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:49.872297   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:49.872330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:49.950152   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:49.950174   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:49.950185   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:50.031251   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:50.031291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:48.517547   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.517942   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:49.150498   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:51.151491   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.401650   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.401866   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.574794   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:52.589052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:52.589117   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:52.625911   57719 cri.go:89] found id: ""
	I0410 22:50:52.625941   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.625952   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:52.625960   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:52.626020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:52.668749   57719 cri.go:89] found id: ""
	I0410 22:50:52.668773   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.668781   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:52.668787   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:52.668835   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:52.713420   57719 cri.go:89] found id: ""
	I0410 22:50:52.713447   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.713457   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:52.713473   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:52.713538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:52.750265   57719 cri.go:89] found id: ""
	I0410 22:50:52.750294   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.750301   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:52.750307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:52.750354   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:52.787552   57719 cri.go:89] found id: ""
	I0410 22:50:52.787586   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.787597   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:52.787604   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:52.787670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:52.827988   57719 cri.go:89] found id: ""
	I0410 22:50:52.828013   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.828020   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:52.828026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:52.828072   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:52.864115   57719 cri.go:89] found id: ""
	I0410 22:50:52.864144   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.864155   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:52.864161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:52.864222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:52.906673   57719 cri.go:89] found id: ""
	I0410 22:50:52.906702   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.906712   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:52.906723   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:52.906742   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:52.960842   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:52.960892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:52.976084   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:52.976114   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:53.052612   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:53.052638   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:53.052656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:53.132465   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:53.132518   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:53.018789   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.518169   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:53.154117   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.653267   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:54.903797   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:57.401445   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.676947   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:55.691098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:55.691183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:55.728711   57719 cri.go:89] found id: ""
	I0410 22:50:55.728740   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.728750   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:55.728758   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:55.728824   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:55.768540   57719 cri.go:89] found id: ""
	I0410 22:50:55.768568   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.768578   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:55.768584   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:55.768649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:55.806901   57719 cri.go:89] found id: ""
	I0410 22:50:55.806928   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.806938   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:55.806945   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:55.807019   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:55.846777   57719 cri.go:89] found id: ""
	I0410 22:50:55.846807   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.846816   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:55.846822   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:55.846873   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:55.887143   57719 cri.go:89] found id: ""
	I0410 22:50:55.887172   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.887181   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:55.887186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:55.887241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:55.929008   57719 cri.go:89] found id: ""
	I0410 22:50:55.929032   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.929040   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:55.929046   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:55.929098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:55.969496   57719 cri.go:89] found id: ""
	I0410 22:50:55.969526   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.969536   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:55.969544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:55.969605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:56.007786   57719 cri.go:89] found id: ""
	I0410 22:50:56.007818   57719 logs.go:276] 0 containers: []
	W0410 22:50:56.007828   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:56.007838   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:56.007854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:56.061616   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:56.061653   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:56.078664   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:56.078689   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:56.165015   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:56.165037   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:56.165053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:56.241928   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:56.241971   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:58.785955   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:58.799544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:58.799604   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:58.837234   57719 cri.go:89] found id: ""
	I0410 22:50:58.837264   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.837275   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:58.837283   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:58.837350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:58.877818   57719 cri.go:89] found id: ""
	I0410 22:50:58.877854   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.877861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:58.877867   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:58.877921   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:58.919705   57719 cri.go:89] found id: ""
	I0410 22:50:58.919729   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.919740   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:58.919747   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:58.919809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:58.957995   57719 cri.go:89] found id: ""
	I0410 22:50:58.958020   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.958029   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:58.958036   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:58.958091   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:58.999966   57719 cri.go:89] found id: ""
	I0410 22:50:58.999995   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.000008   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:59.000016   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:59.000088   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:59.040516   57719 cri.go:89] found id: ""
	I0410 22:50:59.040541   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.040552   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:59.040560   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:59.040623   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:59.078869   57719 cri.go:89] found id: ""
	I0410 22:50:59.078899   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.078908   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:59.078913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:59.078961   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:59.116637   57719 cri.go:89] found id: ""
	I0410 22:50:59.116663   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.116670   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:59.116679   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:59.116697   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:59.195852   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:59.195892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:59.243256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:59.243282   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:59.299195   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:59.299263   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:59.314512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:59.314537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:59.386468   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:58.016995   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.018205   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:58.151543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.650140   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:59.901858   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.902933   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:04.402128   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.886907   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:01.905169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:01.905251   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:01.944154   57719 cri.go:89] found id: ""
	I0410 22:51:01.944187   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.944198   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:01.944205   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:01.944268   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:01.982743   57719 cri.go:89] found id: ""
	I0410 22:51:01.982778   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.982789   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:01.982797   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:01.982864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:02.020072   57719 cri.go:89] found id: ""
	I0410 22:51:02.020094   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.020102   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:02.020159   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:02.020213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:02.064250   57719 cri.go:89] found id: ""
	I0410 22:51:02.064273   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.064280   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:02.064286   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:02.064339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:02.105013   57719 cri.go:89] found id: ""
	I0410 22:51:02.105045   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.105054   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:02.105060   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:02.105106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:02.145664   57719 cri.go:89] found id: ""
	I0410 22:51:02.145689   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.145695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:02.145701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:02.145759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:02.189752   57719 cri.go:89] found id: ""
	I0410 22:51:02.189831   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.189850   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:02.189857   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:02.189929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:02.228315   57719 cri.go:89] found id: ""
	I0410 22:51:02.228347   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.228358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:02.228374   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:02.228390   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:02.281425   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:02.281460   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:02.296003   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:02.296031   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:02.389572   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:02.389599   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:02.389613   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.475881   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:02.475916   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.022037   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:05.037242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:05.037304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:05.073656   57719 cri.go:89] found id: ""
	I0410 22:51:05.073687   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.073698   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:05.073705   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:05.073767   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:05.114321   57719 cri.go:89] found id: ""
	I0410 22:51:05.114348   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.114356   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:05.114361   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:05.114430   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:05.153119   57719 cri.go:89] found id: ""
	I0410 22:51:05.153156   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.153164   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:05.153170   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:05.153230   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:05.193393   57719 cri.go:89] found id: ""
	I0410 22:51:05.193420   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.193428   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:05.193433   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:05.193479   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:05.229826   57719 cri.go:89] found id: ""
	I0410 22:51:05.229853   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.229861   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:05.229867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:05.229915   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:05.265511   57719 cri.go:89] found id: ""
	I0410 22:51:05.265544   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.265555   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:05.265562   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:05.265627   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:05.302257   57719 cri.go:89] found id: ""
	I0410 22:51:05.302287   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.302297   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:05.302305   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:05.302386   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:05.347344   57719 cri.go:89] found id: ""
	I0410 22:51:05.347372   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.347380   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:05.347388   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:05.347399   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:05.421796   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:05.421817   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:05.421829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.521499   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.017660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.017945   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:02.651104   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.150286   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.150565   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:06.402266   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:08.406456   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.501803   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:05.501839   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.549161   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:05.549195   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:05.599598   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:05.599633   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.115679   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:08.130273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:08.130350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:08.172302   57719 cri.go:89] found id: ""
	I0410 22:51:08.172328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.172335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:08.172342   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:08.172390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:08.220789   57719 cri.go:89] found id: ""
	I0410 22:51:08.220812   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.220819   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:08.220825   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:08.220874   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:08.258299   57719 cri.go:89] found id: ""
	I0410 22:51:08.258328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.258341   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:08.258349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:08.258404   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:08.297698   57719 cri.go:89] found id: ""
	I0410 22:51:08.297726   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.297733   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:08.297739   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:08.297787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:08.335564   57719 cri.go:89] found id: ""
	I0410 22:51:08.335595   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.335605   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:08.335613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:08.335671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:08.373340   57719 cri.go:89] found id: ""
	I0410 22:51:08.373367   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.373377   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:08.373384   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:08.373481   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:08.413961   57719 cri.go:89] found id: ""
	I0410 22:51:08.413984   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.413993   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:08.414001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:08.414062   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:08.459449   57719 cri.go:89] found id: ""
	I0410 22:51:08.459481   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.459492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:08.459505   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:08.459521   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:08.518061   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:08.518103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.533653   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:08.533680   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:08.619882   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:08.619917   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:08.619932   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:08.696329   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:08.696364   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:09.518298   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.518877   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:09.650387   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.650614   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:10.902634   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:13.402009   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.256846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:11.271521   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:11.271582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:11.312829   57719 cri.go:89] found id: ""
	I0410 22:51:11.312851   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.312869   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:11.312876   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:11.312930   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:11.355183   57719 cri.go:89] found id: ""
	I0410 22:51:11.355210   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.355220   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:11.355227   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:11.355287   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:11.394345   57719 cri.go:89] found id: ""
	I0410 22:51:11.394376   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.394388   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:11.394396   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:11.394460   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:11.434128   57719 cri.go:89] found id: ""
	I0410 22:51:11.434155   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.434163   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:11.434169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:11.434219   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:11.473160   57719 cri.go:89] found id: ""
	I0410 22:51:11.473189   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.473201   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:11.473208   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:11.473278   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:11.513782   57719 cri.go:89] found id: ""
	I0410 22:51:11.513815   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.513826   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:11.513835   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:11.513891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:11.556057   57719 cri.go:89] found id: ""
	I0410 22:51:11.556085   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.556093   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:11.556100   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:11.556147   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:11.594557   57719 cri.go:89] found id: ""
	I0410 22:51:11.594579   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.594586   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:11.594594   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:11.594609   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:11.672795   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:11.672841   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:11.716011   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:11.716046   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:11.769372   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:11.769413   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:11.784589   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:11.784617   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:11.857051   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.358019   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:14.372116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:14.372192   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:14.412020   57719 cri.go:89] found id: ""
	I0410 22:51:14.412049   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.412061   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:14.412068   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:14.412128   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:14.450317   57719 cri.go:89] found id: ""
	I0410 22:51:14.450349   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.450360   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:14.450368   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:14.450426   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:14.509080   57719 cri.go:89] found id: ""
	I0410 22:51:14.509104   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.509110   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:14.509116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:14.509185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:14.561540   57719 cri.go:89] found id: ""
	I0410 22:51:14.561572   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.561583   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:14.561590   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:14.561670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:14.622498   57719 cri.go:89] found id: ""
	I0410 22:51:14.622528   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.622538   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:14.622546   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:14.622606   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:14.678451   57719 cri.go:89] found id: ""
	I0410 22:51:14.678481   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.678490   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:14.678498   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:14.678560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:14.720264   57719 cri.go:89] found id: ""
	I0410 22:51:14.720302   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.720315   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:14.720323   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:14.720388   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:14.758039   57719 cri.go:89] found id: ""
	I0410 22:51:14.758063   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.758071   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:14.758079   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:14.758090   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:14.808111   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:14.808171   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:14.825444   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:14.825487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:14.906859   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.906884   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:14.906899   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:14.995176   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:14.995225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:14.017397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.017624   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:14.149898   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.150320   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:15.901542   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.902391   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.541159   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:17.556679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:17.556749   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:17.595839   57719 cri.go:89] found id: ""
	I0410 22:51:17.595869   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.595880   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:17.595895   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:17.595954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:17.633921   57719 cri.go:89] found id: ""
	I0410 22:51:17.633947   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.633957   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:17.633964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:17.634033   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:17.673467   57719 cri.go:89] found id: ""
	I0410 22:51:17.673493   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.673501   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:17.673507   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:17.673554   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:17.709631   57719 cri.go:89] found id: ""
	I0410 22:51:17.709660   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.709670   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:17.709679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:17.709739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:17.760852   57719 cri.go:89] found id: ""
	I0410 22:51:17.760880   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.760893   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:17.760908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:17.760969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:17.798074   57719 cri.go:89] found id: ""
	I0410 22:51:17.798099   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.798108   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:17.798117   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:17.798178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:17.835807   57719 cri.go:89] found id: ""
	I0410 22:51:17.835839   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.835854   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:17.835863   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:17.835935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:17.876812   57719 cri.go:89] found id: ""
	I0410 22:51:17.876846   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.876856   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:17.876868   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:17.876882   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:17.891121   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:17.891149   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:17.966241   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:17.966264   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:17.966277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:18.042633   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:18.042667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:18.088294   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:18.088327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:18.518103   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.519397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:18.650784   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:21.150770   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.403127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:22.901329   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.647016   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:20.662573   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:20.662640   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:20.701147   57719 cri.go:89] found id: ""
	I0410 22:51:20.701173   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.701184   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:20.701191   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:20.701252   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:20.739005   57719 cri.go:89] found id: ""
	I0410 22:51:20.739038   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.739049   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:20.739057   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:20.739112   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:20.776335   57719 cri.go:89] found id: ""
	I0410 22:51:20.776365   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.776379   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:20.776386   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:20.776471   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:20.814755   57719 cri.go:89] found id: ""
	I0410 22:51:20.814789   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.814800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:20.814808   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:20.814867   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:20.853872   57719 cri.go:89] found id: ""
	I0410 22:51:20.853897   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.853904   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:20.853910   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:20.853958   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:20.891616   57719 cri.go:89] found id: ""
	I0410 22:51:20.891648   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.891656   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:20.891662   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:20.891710   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:20.930285   57719 cri.go:89] found id: ""
	I0410 22:51:20.930316   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.930326   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:20.930341   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:20.930398   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:20.967857   57719 cri.go:89] found id: ""
	I0410 22:51:20.967894   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.967904   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:20.967913   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:20.967934   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:21.053166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:21.053201   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:21.098860   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:21.098888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:21.150395   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:21.150430   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:21.164707   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:21.164737   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:21.251010   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:23.751441   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:23.769949   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:23.770014   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:23.809652   57719 cri.go:89] found id: ""
	I0410 22:51:23.809678   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.809686   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:23.809692   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:23.809740   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:23.847331   57719 cri.go:89] found id: ""
	I0410 22:51:23.847364   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.847374   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:23.847383   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:23.847445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:23.889459   57719 cri.go:89] found id: ""
	I0410 22:51:23.889488   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.889498   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:23.889505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:23.889564   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:23.932683   57719 cri.go:89] found id: ""
	I0410 22:51:23.932712   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.932720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:23.932727   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:23.932787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:23.974161   57719 cri.go:89] found id: ""
	I0410 22:51:23.974187   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.974194   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:23.974200   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:23.974253   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:24.013058   57719 cri.go:89] found id: ""
	I0410 22:51:24.013087   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.013098   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:24.013106   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:24.013169   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:24.052556   57719 cri.go:89] found id: ""
	I0410 22:51:24.052582   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.052590   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:24.052596   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:24.052643   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:24.089940   57719 cri.go:89] found id: ""
	I0410 22:51:24.089967   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.089974   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:24.089982   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:24.089992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:24.133198   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:24.133226   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:24.186615   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:24.186651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:24.200559   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:24.200586   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:24.277061   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:24.277093   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:24.277109   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:23.016887   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:25.018325   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:27.018514   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:23.650669   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.149198   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:24.901704   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.902227   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.902337   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.855354   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:26.870269   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:26.870329   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:26.910056   57719 cri.go:89] found id: ""
	I0410 22:51:26.910084   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.910094   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:26.910101   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:26.910163   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:26.949646   57719 cri.go:89] found id: ""
	I0410 22:51:26.949674   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.949684   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:26.949690   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:26.949759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:26.990945   57719 cri.go:89] found id: ""
	I0410 22:51:26.990970   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.990977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:26.990984   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:26.991053   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:27.029464   57719 cri.go:89] found id: ""
	I0410 22:51:27.029491   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.029500   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:27.029505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:27.029562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:27.072194   57719 cri.go:89] found id: ""
	I0410 22:51:27.072235   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.072260   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:27.072270   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:27.072339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:27.106942   57719 cri.go:89] found id: ""
	I0410 22:51:27.106969   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.106979   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:27.106985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:27.107045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:27.144851   57719 cri.go:89] found id: ""
	I0410 22:51:27.144885   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.144894   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:27.144909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:27.144970   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:27.188138   57719 cri.go:89] found id: ""
	I0410 22:51:27.188166   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.188178   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:27.188189   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:27.188204   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:27.241911   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:27.241943   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:27.255296   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:27.255322   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:27.327638   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:27.327663   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:27.327678   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:27.409048   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:27.409083   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:29.960093   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:29.975583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:29.975647   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:30.018120   57719 cri.go:89] found id: ""
	I0410 22:51:30.018149   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.018159   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:30.018166   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:30.018225   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:30.055487   57719 cri.go:89] found id: ""
	I0410 22:51:30.055511   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.055518   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:30.055524   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:30.055573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:30.093723   57719 cri.go:89] found id: ""
	I0410 22:51:30.093749   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.093756   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:30.093761   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:30.093808   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:30.138278   57719 cri.go:89] found id: ""
	I0410 22:51:30.138306   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.138317   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:30.138324   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:30.138385   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:30.174454   57719 cri.go:89] found id: ""
	I0410 22:51:30.174484   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.174495   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:30.174502   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:30.174573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:30.213189   57719 cri.go:89] found id: ""
	I0410 22:51:30.213214   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.213221   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:30.213227   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:30.213272   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:30.253264   57719 cri.go:89] found id: ""
	I0410 22:51:30.253294   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.253304   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:30.253309   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:30.253357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:30.289729   57719 cri.go:89] found id: ""
	I0410 22:51:30.289755   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.289767   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:30.289777   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:30.289793   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:30.303387   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:30.303416   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:30.381294   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:30.381315   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:30.381331   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:29.019226   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:31.519681   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.150621   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.649807   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.903662   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:33.401827   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.468072   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:30.468110   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:30.508761   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:30.508794   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.061654   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:33.077072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:33.077146   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:33.113753   57719 cri.go:89] found id: ""
	I0410 22:51:33.113781   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.113791   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:33.113798   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:33.113848   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:33.149212   57719 cri.go:89] found id: ""
	I0410 22:51:33.149238   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.149249   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:33.149256   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:33.149321   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:33.185619   57719 cri.go:89] found id: ""
	I0410 22:51:33.185649   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.185659   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:33.185667   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:33.185725   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:33.222270   57719 cri.go:89] found id: ""
	I0410 22:51:33.222301   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.222313   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:33.222320   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:33.222375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:33.258594   57719 cri.go:89] found id: ""
	I0410 22:51:33.258624   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.258636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:33.258642   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:33.258689   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:33.298326   57719 cri.go:89] found id: ""
	I0410 22:51:33.298360   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.298368   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:33.298374   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:33.298438   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:33.337407   57719 cri.go:89] found id: ""
	I0410 22:51:33.337438   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.337449   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:33.337456   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:33.337520   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:33.374971   57719 cri.go:89] found id: ""
	I0410 22:51:33.375003   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.375014   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:33.375024   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:33.375039   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:33.415256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:33.415288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.467895   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:33.467929   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:33.484604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:33.484639   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:33.562267   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:33.562288   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:33.562299   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:34.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.519093   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:32.650396   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.150200   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.902810   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:38.401463   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.142628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:36.157825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:36.157883   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:36.199418   57719 cri.go:89] found id: ""
	I0410 22:51:36.199446   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.199456   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:36.199463   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:36.199523   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:36.238136   57719 cri.go:89] found id: ""
	I0410 22:51:36.238166   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.238174   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:36.238180   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:36.238229   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:36.273995   57719 cri.go:89] found id: ""
	I0410 22:51:36.274026   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.274037   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:36.274049   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:36.274110   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:36.311007   57719 cri.go:89] found id: ""
	I0410 22:51:36.311039   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.311049   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:36.311057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:36.311122   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:36.351062   57719 cri.go:89] found id: ""
	I0410 22:51:36.351086   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.351093   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:36.351099   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:36.351152   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:36.388660   57719 cri.go:89] found id: ""
	I0410 22:51:36.388689   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.388703   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:36.388711   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:36.388762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:36.428715   57719 cri.go:89] found id: ""
	I0410 22:51:36.428753   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.428761   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:36.428767   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:36.428831   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:36.467186   57719 cri.go:89] found id: ""
	I0410 22:51:36.467213   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.467220   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:36.467228   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:36.467239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:36.521831   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:36.521860   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:36.536929   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:36.536957   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:36.614624   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:36.614647   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:36.614659   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:36.694604   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:36.694646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.240039   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:39.255177   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:39.255262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:39.293063   57719 cri.go:89] found id: ""
	I0410 22:51:39.293091   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.293113   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:39.293120   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:39.293181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:39.331603   57719 cri.go:89] found id: ""
	I0410 22:51:39.331631   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.331639   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:39.331645   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:39.331697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:39.372881   57719 cri.go:89] found id: ""
	I0410 22:51:39.372908   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.372919   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:39.372926   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:39.372987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:39.417399   57719 cri.go:89] found id: ""
	I0410 22:51:39.417425   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.417435   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:39.417442   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:39.417503   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:39.458836   57719 cri.go:89] found id: ""
	I0410 22:51:39.458868   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.458877   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:39.458882   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:39.458932   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:39.496436   57719 cri.go:89] found id: ""
	I0410 22:51:39.496460   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.496467   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:39.496474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:39.496532   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:39.534649   57719 cri.go:89] found id: ""
	I0410 22:51:39.534681   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.534690   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:39.534695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:39.534754   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:39.571677   57719 cri.go:89] found id: ""
	I0410 22:51:39.571698   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.571705   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:39.571714   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:39.571725   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.621445   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:39.621482   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:39.676341   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:39.676382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:39.691543   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:39.691573   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:39.769452   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:39.769477   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:39.769493   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:39.017483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:41.020027   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:37.651534   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.151404   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.401635   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.401931   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.401972   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.350823   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:42.367124   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:42.367199   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:42.407511   57719 cri.go:89] found id: ""
	I0410 22:51:42.407545   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.407554   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:42.407560   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:42.407622   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:42.442913   57719 cri.go:89] found id: ""
	I0410 22:51:42.442948   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.442958   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:42.442964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:42.443027   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:42.480747   57719 cri.go:89] found id: ""
	I0410 22:51:42.480777   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.480786   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:42.480792   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:42.480846   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:42.521610   57719 cri.go:89] found id: ""
	I0410 22:51:42.521635   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.521644   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:42.521651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:42.521698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:42.561076   57719 cri.go:89] found id: ""
	I0410 22:51:42.561108   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.561119   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:42.561127   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:42.561189   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:42.598034   57719 cri.go:89] found id: ""
	I0410 22:51:42.598059   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.598066   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:42.598072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:42.598129   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:42.637051   57719 cri.go:89] found id: ""
	I0410 22:51:42.637085   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.637095   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:42.637103   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:42.637162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:42.676051   57719 cri.go:89] found id: ""
	I0410 22:51:42.676084   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.676094   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:42.676105   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:42.676120   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:42.719607   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:42.719634   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:42.770791   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:42.770829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:42.785704   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:42.785730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:42.876445   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:42.876475   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:42.876490   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:43.518453   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.019450   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.650486   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.650894   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:47.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.901358   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:48.902417   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:45.458721   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:45.474125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:45.474203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:45.511105   57719 cri.go:89] found id: ""
	I0410 22:51:45.511143   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.511153   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:45.511161   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:45.511220   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:45.552891   57719 cri.go:89] found id: ""
	I0410 22:51:45.552916   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.552924   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:45.552930   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:45.552986   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:45.592423   57719 cri.go:89] found id: ""
	I0410 22:51:45.592458   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.592474   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:45.592481   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:45.592542   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:45.630964   57719 cri.go:89] found id: ""
	I0410 22:51:45.631009   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.631026   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:45.631033   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:45.631098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:45.669557   57719 cri.go:89] found id: ""
	I0410 22:51:45.669586   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.669595   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:45.669602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:45.669702   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:45.706359   57719 cri.go:89] found id: ""
	I0410 22:51:45.706387   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.706395   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:45.706402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:45.706463   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:45.743301   57719 cri.go:89] found id: ""
	I0410 22:51:45.743330   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.743337   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:45.743343   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:45.743390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:45.781679   57719 cri.go:89] found id: ""
	I0410 22:51:45.781703   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.781711   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:45.781718   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:45.781730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:45.835251   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:45.835286   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:45.849255   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:45.849284   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:45.918404   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:45.918436   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:45.918452   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:45.999556   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:45.999591   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.546421   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:48.561243   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:48.561314   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:48.618335   57719 cri.go:89] found id: ""
	I0410 22:51:48.618361   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.618369   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:48.618375   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:48.618445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:48.656116   57719 cri.go:89] found id: ""
	I0410 22:51:48.656151   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.656160   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:48.656167   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:48.656222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:48.694846   57719 cri.go:89] found id: ""
	I0410 22:51:48.694874   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.694884   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:48.694897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:48.694971   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:48.731988   57719 cri.go:89] found id: ""
	I0410 22:51:48.732020   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.732031   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:48.732039   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:48.732102   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:48.768595   57719 cri.go:89] found id: ""
	I0410 22:51:48.768627   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.768636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:48.768643   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:48.768708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:48.807263   57719 cri.go:89] found id: ""
	I0410 22:51:48.807292   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.807302   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:48.807308   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:48.807366   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:48.845291   57719 cri.go:89] found id: ""
	I0410 22:51:48.845317   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.845325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:48.845329   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:48.845399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:48.891056   57719 cri.go:89] found id: ""
	I0410 22:51:48.891081   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.891091   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:48.891102   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:48.891117   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.931963   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:48.931992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:48.985539   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:48.985579   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:49.000685   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:49.000716   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:49.076097   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:49.076127   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:49.076143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:48.517879   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.018479   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:49.150511   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.650519   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.400971   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:53.401596   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.663336   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:51.678249   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:51.678315   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:51.720062   57719 cri.go:89] found id: ""
	I0410 22:51:51.720088   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.720096   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:51.720103   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:51.720164   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:51.766351   57719 cri.go:89] found id: ""
	I0410 22:51:51.766387   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.766395   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:51.766401   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:51.766448   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:51.813037   57719 cri.go:89] found id: ""
	I0410 22:51:51.813068   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.813080   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:51.813087   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:51.813150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:51.849232   57719 cri.go:89] found id: ""
	I0410 22:51:51.849262   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.849273   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:51.849280   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:51.849346   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:51.886392   57719 cri.go:89] found id: ""
	I0410 22:51:51.886415   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.886422   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:51.886428   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:51.886485   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:51.930859   57719 cri.go:89] found id: ""
	I0410 22:51:51.930896   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.930905   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:51.930913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:51.930978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:51.970403   57719 cri.go:89] found id: ""
	I0410 22:51:51.970501   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.970524   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:51.970533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:51.970599   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:52.008281   57719 cri.go:89] found id: ""
	I0410 22:51:52.008311   57719 logs.go:276] 0 containers: []
	W0410 22:51:52.008322   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:52.008333   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:52.008347   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:52.060623   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:52.060656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:52.075529   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:52.075559   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:52.158330   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:52.158356   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:52.158371   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:52.236356   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:52.236392   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:54.782448   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:54.796928   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:54.796997   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:54.836297   57719 cri.go:89] found id: ""
	I0410 22:51:54.836326   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.836335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:54.836341   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:54.836390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:54.873501   57719 cri.go:89] found id: ""
	I0410 22:51:54.873532   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.873540   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:54.873547   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:54.873617   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:54.914200   57719 cri.go:89] found id: ""
	I0410 22:51:54.914227   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.914238   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:54.914247   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:54.914308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:54.958654   57719 cri.go:89] found id: ""
	I0410 22:51:54.958682   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.958693   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:54.958702   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:54.958761   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:55.017032   57719 cri.go:89] found id: ""
	I0410 22:51:55.017078   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.017090   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:55.017101   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:55.017167   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:55.093024   57719 cri.go:89] found id: ""
	I0410 22:51:55.093059   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.093070   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:55.093085   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:55.093156   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:55.142412   57719 cri.go:89] found id: ""
	I0410 22:51:55.142441   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.142456   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:55.142464   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:55.142521   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:55.180116   57719 cri.go:89] found id: ""
	I0410 22:51:55.180147   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.180159   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:55.180169   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:55.180186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:55.249118   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:55.249139   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:55.249153   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:55.327558   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:55.327597   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:55.373127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:55.373163   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:53.518589   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.017080   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:54.151372   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.650238   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.401716   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:57.902174   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.431602   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:55.431647   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:57.947559   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:57.962916   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:57.962983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:58.000955   57719 cri.go:89] found id: ""
	I0410 22:51:58.000983   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.000990   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:58.000997   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:58.001049   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:58.040556   57719 cri.go:89] found id: ""
	I0410 22:51:58.040579   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.040586   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:58.040592   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:58.040649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:58.079121   57719 cri.go:89] found id: ""
	I0410 22:51:58.079148   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.079155   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:58.079161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:58.079240   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:58.119876   57719 cri.go:89] found id: ""
	I0410 22:51:58.119902   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.119914   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:58.119929   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:58.119987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:58.160130   57719 cri.go:89] found id: ""
	I0410 22:51:58.160162   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.160173   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:58.160181   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:58.160258   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:58.198162   57719 cri.go:89] found id: ""
	I0410 22:51:58.198195   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.198207   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:58.198215   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:58.198266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:58.235049   57719 cri.go:89] found id: ""
	I0410 22:51:58.235078   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.235089   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:58.235096   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:58.235157   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:58.275786   57719 cri.go:89] found id: ""
	I0410 22:51:58.275825   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.275845   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:58.275856   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:58.275872   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:58.316246   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:58.316277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:58.371614   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:58.371649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:58.386610   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:58.386646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:58.465167   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:58.465187   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:58.465199   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:58.018362   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.517710   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:59.152119   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.650566   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.401148   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:02.401494   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.401624   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.049405   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:01.073251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:01.073328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:01.125169   57719 cri.go:89] found id: ""
	I0410 22:52:01.125201   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.125212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:01.125220   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:01.125289   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:01.171256   57719 cri.go:89] found id: ""
	I0410 22:52:01.171289   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.171300   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:01.171308   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:01.171376   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:01.210444   57719 cri.go:89] found id: ""
	I0410 22:52:01.210478   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.210489   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:01.210503   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:01.210568   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:01.252448   57719 cri.go:89] found id: ""
	I0410 22:52:01.252473   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.252480   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:01.252486   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:01.252531   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:01.293084   57719 cri.go:89] found id: ""
	I0410 22:52:01.293117   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.293128   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:01.293136   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:01.293208   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:01.330992   57719 cri.go:89] found id: ""
	I0410 22:52:01.331019   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.331026   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:01.331032   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:01.331081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:01.369286   57719 cri.go:89] found id: ""
	I0410 22:52:01.369315   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.369325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:01.369331   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:01.369378   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:01.409888   57719 cri.go:89] found id: ""
	I0410 22:52:01.409916   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.409924   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:01.409933   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:01.409944   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:01.484535   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:01.484557   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:01.484569   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:01.565727   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:01.565778   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:01.606987   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:01.607018   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:01.659492   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:01.659529   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.174971   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:04.190302   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:04.190382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:04.230050   57719 cri.go:89] found id: ""
	I0410 22:52:04.230080   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.230090   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:04.230097   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:04.230162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:04.269870   57719 cri.go:89] found id: ""
	I0410 22:52:04.269902   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.269908   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:04.269914   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:04.269969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:04.310977   57719 cri.go:89] found id: ""
	I0410 22:52:04.311008   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.311019   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:04.311026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:04.311096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:04.349108   57719 cri.go:89] found id: ""
	I0410 22:52:04.349136   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.349147   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:04.349154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:04.349216   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:04.389590   57719 cri.go:89] found id: ""
	I0410 22:52:04.389613   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.389625   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:04.389633   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:04.389697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:04.432962   57719 cri.go:89] found id: ""
	I0410 22:52:04.432989   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.433001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:04.433008   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:04.433070   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:04.473912   57719 cri.go:89] found id: ""
	I0410 22:52:04.473946   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.473955   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:04.473960   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:04.474029   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:04.516157   57719 cri.go:89] found id: ""
	I0410 22:52:04.516182   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.516192   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:04.516203   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:04.516218   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:04.569047   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:04.569082   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:04.622639   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:04.622673   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.638441   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:04.638470   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:04.718203   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:04.718227   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:04.718241   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:02.518104   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.519509   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.519648   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.150041   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.150157   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.902111   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.902816   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:07.302147   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:07.315919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:07.315984   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:07.354692   57719 cri.go:89] found id: ""
	I0410 22:52:07.354723   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.354733   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:07.354740   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:07.354803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:07.393418   57719 cri.go:89] found id: ""
	I0410 22:52:07.393447   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.393459   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:07.393466   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:07.393525   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:07.436810   57719 cri.go:89] found id: ""
	I0410 22:52:07.436837   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.436847   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:07.436855   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:07.436920   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:07.478685   57719 cri.go:89] found id: ""
	I0410 22:52:07.478709   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.478720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:07.478735   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:07.478792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:07.515699   57719 cri.go:89] found id: ""
	I0410 22:52:07.515727   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.515737   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:07.515744   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:07.515805   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:07.556419   57719 cri.go:89] found id: ""
	I0410 22:52:07.556443   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.556451   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:07.556457   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:07.556560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:07.598076   57719 cri.go:89] found id: ""
	I0410 22:52:07.598106   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.598113   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:07.598119   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:07.598183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:07.637778   57719 cri.go:89] found id: ""
	I0410 22:52:07.637814   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.637826   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:07.637839   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:07.637854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:07.693688   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:07.693728   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:07.709256   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:07.709289   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:07.778519   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:07.778544   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:07.778584   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:07.858937   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:07.858973   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.405765   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:10.422019   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:10.422083   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:09.017771   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.017883   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.151568   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.650989   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.402181   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.902520   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.463779   57719 cri.go:89] found id: ""
	I0410 22:52:10.463818   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.463829   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:10.463836   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:10.463923   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:10.503680   57719 cri.go:89] found id: ""
	I0410 22:52:10.503710   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.503718   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:10.503736   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:10.503804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:10.545567   57719 cri.go:89] found id: ""
	I0410 22:52:10.545594   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.545605   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:10.545613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:10.545671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:10.590864   57719 cri.go:89] found id: ""
	I0410 22:52:10.590892   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.590901   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:10.590908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:10.590968   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:10.634628   57719 cri.go:89] found id: ""
	I0410 22:52:10.634659   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.634670   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:10.634677   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:10.634758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:10.681477   57719 cri.go:89] found id: ""
	I0410 22:52:10.681507   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.681526   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:10.681533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:10.681585   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:10.725203   57719 cri.go:89] found id: ""
	I0410 22:52:10.725229   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.725328   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:10.725368   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:10.725443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:10.764994   57719 cri.go:89] found id: ""
	I0410 22:52:10.765028   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.765036   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:10.765044   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:10.765094   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.808981   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:10.809012   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:10.866429   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:10.866468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:10.882512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:10.882537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:10.963016   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:10.963041   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:10.963053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:13.544552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:13.558161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:13.558238   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:13.596945   57719 cri.go:89] found id: ""
	I0410 22:52:13.596977   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.596988   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:13.596996   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:13.597057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:13.637920   57719 cri.go:89] found id: ""
	I0410 22:52:13.637944   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.637951   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:13.637958   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:13.638012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:13.676777   57719 cri.go:89] found id: ""
	I0410 22:52:13.676808   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.676819   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:13.676826   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:13.676887   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:13.714054   57719 cri.go:89] found id: ""
	I0410 22:52:13.714078   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.714086   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:13.714091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:13.714142   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:13.757162   57719 cri.go:89] found id: ""
	I0410 22:52:13.757194   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.757206   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:13.757214   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:13.757276   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:13.793578   57719 cri.go:89] found id: ""
	I0410 22:52:13.793616   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.793629   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:13.793636   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:13.793697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:13.831307   57719 cri.go:89] found id: ""
	I0410 22:52:13.831336   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.831346   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:13.831353   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:13.831400   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:13.872072   57719 cri.go:89] found id: ""
	I0410 22:52:13.872109   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.872117   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:13.872127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:13.872143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:13.926909   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:13.926947   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:13.943095   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:13.943126   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:14.015301   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:14.015336   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:14.015351   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:14.101100   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:14.101137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:13.019599   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.517932   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.150248   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.650269   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.401396   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.402384   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.650213   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:16.664603   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:16.664677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:16.701498   57719 cri.go:89] found id: ""
	I0410 22:52:16.701527   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.701539   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:16.701547   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:16.701618   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:16.740687   57719 cri.go:89] found id: ""
	I0410 22:52:16.740716   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.740725   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:16.740730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:16.740789   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:16.777349   57719 cri.go:89] found id: ""
	I0410 22:52:16.777372   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.777380   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:16.777385   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:16.777454   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:16.819855   57719 cri.go:89] found id: ""
	I0410 22:52:16.819890   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.819900   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:16.819909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:16.819973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:16.859939   57719 cri.go:89] found id: ""
	I0410 22:52:16.859970   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.859981   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:16.859991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:16.860056   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:16.897861   57719 cri.go:89] found id: ""
	I0410 22:52:16.897886   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.897893   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:16.897899   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:16.897962   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:16.935642   57719 cri.go:89] found id: ""
	I0410 22:52:16.935673   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.935681   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:16.935687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:16.935733   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:16.974268   57719 cri.go:89] found id: ""
	I0410 22:52:16.974294   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.974302   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:16.974311   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:16.974327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:17.027850   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:17.027888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:17.043343   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:17.043379   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:17.120945   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:17.120967   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:17.120979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.204831   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:17.204868   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:19.749712   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:19.764102   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:19.764181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:19.800759   57719 cri.go:89] found id: ""
	I0410 22:52:19.800787   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.800795   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:19.800801   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:19.800851   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:19.839678   57719 cri.go:89] found id: ""
	I0410 22:52:19.839711   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.839723   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:19.839730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:19.839791   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:19.876983   57719 cri.go:89] found id: ""
	I0410 22:52:19.877007   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.877015   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:19.877020   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:19.877081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:19.918139   57719 cri.go:89] found id: ""
	I0410 22:52:19.918167   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.918177   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:19.918186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:19.918243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:19.954770   57719 cri.go:89] found id: ""
	I0410 22:52:19.954808   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.954818   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:19.954825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:19.954881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:19.993643   57719 cri.go:89] found id: ""
	I0410 22:52:19.993670   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.993680   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:19.993687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:19.993746   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:20.030466   57719 cri.go:89] found id: ""
	I0410 22:52:20.030494   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.030503   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:20.030510   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:20.030575   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:20.069264   57719 cri.go:89] found id: ""
	I0410 22:52:20.069291   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.069299   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:20.069307   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:20.069318   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:20.117354   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:20.117382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:20.170758   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:20.170800   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:20.187014   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:20.187055   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:20.269620   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:20.269645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:20.269661   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.518440   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.018602   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.151102   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.151664   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.901836   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:23.401655   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.844841   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:22.861923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:22.861983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:22.907972   57719 cri.go:89] found id: ""
	I0410 22:52:22.908000   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.908010   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:22.908017   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:22.908081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:22.949822   57719 cri.go:89] found id: ""
	I0410 22:52:22.949851   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.949861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:22.949869   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:22.949935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:22.989872   57719 cri.go:89] found id: ""
	I0410 22:52:22.989895   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.989902   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:22.989908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:22.989959   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:23.031881   57719 cri.go:89] found id: ""
	I0410 22:52:23.031900   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.031908   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:23.031913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:23.031978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:23.071691   57719 cri.go:89] found id: ""
	I0410 22:52:23.071719   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.071726   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:23.071732   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:23.071792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:23.109961   57719 cri.go:89] found id: ""
	I0410 22:52:23.109990   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.110001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:23.110009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:23.110069   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:23.152955   57719 cri.go:89] found id: ""
	I0410 22:52:23.152979   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.152986   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:23.152991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:23.153054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:23.191883   57719 cri.go:89] found id: ""
	I0410 22:52:23.191924   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.191935   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:23.191947   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:23.191959   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:23.232692   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:23.232731   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:23.283648   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:23.283684   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:23.297701   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:23.297729   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:23.381657   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:23.381673   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:23.381685   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:22.520899   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.016955   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.018541   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.650053   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.402084   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.402670   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.961531   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:25.977539   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:25.977639   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:26.021844   57719 cri.go:89] found id: ""
	I0410 22:52:26.021875   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.021886   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:26.021893   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:26.021954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:26.064286   57719 cri.go:89] found id: ""
	I0410 22:52:26.064316   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.064327   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:26.064335   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:26.064394   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:26.104381   57719 cri.go:89] found id: ""
	I0410 22:52:26.104426   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.104437   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:26.104445   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:26.104522   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:26.143382   57719 cri.go:89] found id: ""
	I0410 22:52:26.143407   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.143417   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:26.143424   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:26.143489   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:26.179609   57719 cri.go:89] found id: ""
	I0410 22:52:26.179635   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.179646   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:26.179652   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:26.179714   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:26.217660   57719 cri.go:89] found id: ""
	I0410 22:52:26.217689   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.217695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:26.217701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:26.217758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:26.254914   57719 cri.go:89] found id: ""
	I0410 22:52:26.254946   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.254956   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:26.254963   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:26.255047   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:26.293738   57719 cri.go:89] found id: ""
	I0410 22:52:26.293769   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.293779   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:26.293790   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:26.293809   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:26.366700   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:26.366725   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:26.366741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:26.445143   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:26.445183   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:26.493175   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:26.493203   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:26.554952   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:26.554992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.072225   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:29.087075   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:29.087150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:29.131314   57719 cri.go:89] found id: ""
	I0410 22:52:29.131345   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.131357   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:29.131365   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:29.131427   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:29.169263   57719 cri.go:89] found id: ""
	I0410 22:52:29.169289   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.169298   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:29.169304   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:29.169357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:29.209535   57719 cri.go:89] found id: ""
	I0410 22:52:29.209559   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.209570   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:29.209575   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:29.209630   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:29.251172   57719 cri.go:89] found id: ""
	I0410 22:52:29.251225   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.251233   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:29.251238   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:29.251290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:29.296142   57719 cri.go:89] found id: ""
	I0410 22:52:29.296169   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.296179   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:29.296185   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:29.296245   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:29.336910   57719 cri.go:89] found id: ""
	I0410 22:52:29.336933   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.336940   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:29.336946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:29.337003   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:29.396332   57719 cri.go:89] found id: ""
	I0410 22:52:29.396371   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.396382   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:29.396390   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:29.396475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:29.438301   57719 cri.go:89] found id: ""
	I0410 22:52:29.438332   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.438340   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:29.438348   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:29.438360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:29.482687   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:29.482711   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:29.535115   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:29.535146   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.551736   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:29.551760   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:29.624162   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:29.624198   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:29.624213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:29.517873   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.519737   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.650947   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.651296   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.150101   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.901370   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.902050   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.401849   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.204355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:32.218239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:32.218310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:32.255412   57719 cri.go:89] found id: ""
	I0410 22:52:32.255440   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.255451   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:32.255458   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:32.255516   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:32.293553   57719 cri.go:89] found id: ""
	I0410 22:52:32.293580   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.293591   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:32.293604   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:32.293663   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:32.332814   57719 cri.go:89] found id: ""
	I0410 22:52:32.332846   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.332855   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:32.332862   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:32.332924   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:32.371312   57719 cri.go:89] found id: ""
	I0410 22:52:32.371347   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.371368   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:32.371376   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:32.371441   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:32.407630   57719 cri.go:89] found id: ""
	I0410 22:52:32.407652   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.407659   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:32.407664   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:32.407720   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:32.444878   57719 cri.go:89] found id: ""
	I0410 22:52:32.444904   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.444914   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:32.444923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:32.444989   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:32.490540   57719 cri.go:89] found id: ""
	I0410 22:52:32.490567   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.490578   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:32.490586   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:32.490644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:32.528911   57719 cri.go:89] found id: ""
	I0410 22:52:32.528953   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.528961   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:32.528969   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:32.528979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:32.608601   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:32.608626   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:32.608641   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:32.684840   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:32.684876   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:32.728092   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:32.728132   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:32.778491   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:32.778524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.296228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:35.310615   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:35.310705   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:35.377585   57719 cri.go:89] found id: ""
	I0410 22:52:35.377612   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.377623   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:35.377632   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:35.377692   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:35.417734   57719 cri.go:89] found id: ""
	I0410 22:52:35.417775   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.417796   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:35.417803   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:35.417864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:34.017119   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.017526   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.150859   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.151112   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.402036   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.402201   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:35.456256   57719 cri.go:89] found id: ""
	I0410 22:52:35.456281   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.456291   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:35.456298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:35.456382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:35.495233   57719 cri.go:89] found id: ""
	I0410 22:52:35.495257   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.495267   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:35.495274   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:35.495333   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:35.535239   57719 cri.go:89] found id: ""
	I0410 22:52:35.535273   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.535284   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:35.535292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:35.535352   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:35.571601   57719 cri.go:89] found id: ""
	I0410 22:52:35.571628   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.571638   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:35.571645   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:35.571708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:35.612008   57719 cri.go:89] found id: ""
	I0410 22:52:35.612036   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.612045   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:35.612051   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:35.612099   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:35.649029   57719 cri.go:89] found id: ""
	I0410 22:52:35.649057   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.649065   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:35.649073   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:35.649084   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:35.702630   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:35.702668   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.718404   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:35.718433   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:35.798380   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:35.798405   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:35.798420   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:35.874049   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:35.874085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:38.416265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:38.430921   57719 kubeadm.go:591] duration metric: took 4m3.090666464s to restartPrimaryControlPlane
	W0410 22:52:38.431006   57719 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:52:38.431030   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:52:41.138973   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.707913754s)
	I0410 22:52:41.139063   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:52:41.155646   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:52:41.166345   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:52:41.176443   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:52:41.176481   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:52:41.176547   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:52:41.186887   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:52:41.186960   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:52:41.199740   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:52:41.209843   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:52:41.209901   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:52:41.219804   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.229739   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:52:41.229807   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.240127   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:52:41.249763   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:52:41.249824   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:52:41.260148   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:52:41.334127   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:52:41.334200   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:52:41.506104   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:52:41.506307   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:52:41.506488   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:52:41.715227   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:52:38.519180   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.018674   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.649983   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.152610   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.717460   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:52:41.717564   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:52:41.717654   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:52:41.717781   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:52:41.717898   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:52:41.718004   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:52:41.718099   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:52:41.718203   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:52:41.718550   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:52:41.719083   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:52:41.719413   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:52:41.719571   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:52:41.719675   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:52:41.998202   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:52:42.109508   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:52:42.315545   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:52:42.448910   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:52:42.465903   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:52:42.467312   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:52:42.467387   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:52:42.636790   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:52:40.402237   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.404435   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.638969   57719 out.go:204]   - Booting up control plane ...
	I0410 22:52:42.639106   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:52:42.652152   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:52:42.653843   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:52:42.654719   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:52:42.658006   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:52:43.518416   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.017894   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:43.650778   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.149976   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:44.902059   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.902549   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:49.401695   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.517833   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.150825   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:50.151391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.901096   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.902619   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.518616   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.519254   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:52.649783   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:54.651766   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:56.655687   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.903916   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.400789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.517303   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:59.152346   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:01.651146   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.901531   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.400690   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:02.517569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:04.517775   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.017655   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.651728   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.652505   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.901605   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.902363   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:09.018576   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:11.510820   58186 pod_ready.go:81] duration metric: took 4m0.000124062s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:11.510861   58186 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:53:11.510885   58186 pod_ready.go:38] duration metric: took 4m10.548289153s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:11.510918   58186 kubeadm.go:591] duration metric: took 4m18.480793797s to restartPrimaryControlPlane
	W0410 22:53:11.510993   58186 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:53:11.511019   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:53:08.151155   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.151358   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.400722   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.401658   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.401745   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.652391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.652682   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:17.149892   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:16.900482   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:18.900789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:19.152154   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:21.649975   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:20.902068   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:23.401500   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:22.660165   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:53:22.660260   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:22.660520   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:23.653457   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:26.149469   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:25.903070   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:28.400947   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:27.660705   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:27.660919   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:28.150895   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.650254   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.401054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.401994   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.654427   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:35.149580   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150506   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150533   58701 pod_ready.go:81] duration metric: took 4m0.00757056s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:37.150544   58701 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0410 22:53:37.150552   58701 pod_ready.go:38] duration metric: took 4m5.55870495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:37.150570   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:53:37.150602   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:37.150659   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:37.213472   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.213499   58701 cri.go:89] found id: ""
	I0410 22:53:37.213511   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:37.213561   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.218928   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:37.218997   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:37.260045   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.260066   58701 cri.go:89] found id: ""
	I0410 22:53:37.260073   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:37.260116   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.265329   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:37.265393   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:37.306649   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:37.306674   58701 cri.go:89] found id: ""
	I0410 22:53:37.306682   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:37.306729   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.311163   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:37.311213   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:37.351855   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:37.351883   58701 cri.go:89] found id: ""
	I0410 22:53:37.351890   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:37.351937   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.356427   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:37.356497   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:34.900998   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:36.901173   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:39.400680   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.661409   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:37.661698   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:37.399224   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:37.399248   58701 cri.go:89] found id: ""
	I0410 22:53:37.399257   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:37.399315   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.404314   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:37.404380   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:37.444169   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.444196   58701 cri.go:89] found id: ""
	I0410 22:53:37.444205   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:37.444264   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.448618   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:37.448693   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:37.487481   58701 cri.go:89] found id: ""
	I0410 22:53:37.487507   58701 logs.go:276] 0 containers: []
	W0410 22:53:37.487514   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:37.487519   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:37.487566   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:37.531000   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:37.531018   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.531022   58701 cri.go:89] found id: ""
	I0410 22:53:37.531029   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:37.531081   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.535679   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.539974   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:37.539998   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:37.601043   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:37.601086   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:37.616427   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:37.616458   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.669951   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:37.669983   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.716243   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:37.716273   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.774644   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:37.774678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.821033   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:37.821077   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:37.883644   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:37.883678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:38.019289   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:38.019320   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:38.057708   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:38.057739   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:38.100119   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:38.100149   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:38.143845   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:38.143875   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:38.186718   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:38.186749   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:41.168951   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:53:41.186828   58701 api_server.go:72] duration metric: took 4m17.343179611s to wait for apiserver process to appear ...
	I0410 22:53:41.186866   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:53:41.186911   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:41.186972   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:41.228167   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.228194   58701 cri.go:89] found id: ""
	I0410 22:53:41.228201   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:41.228251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.232754   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:41.232812   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:41.271497   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.271519   58701 cri.go:89] found id: ""
	I0410 22:53:41.271527   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:41.271575   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.276165   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:41.276234   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:41.319164   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.319187   58701 cri.go:89] found id: ""
	I0410 22:53:41.319195   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:41.319251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.323627   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:41.323696   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:41.366648   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:41.366671   58701 cri.go:89] found id: ""
	I0410 22:53:41.366678   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:41.366733   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.371132   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:41.371197   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:41.412956   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.412974   58701 cri.go:89] found id: ""
	I0410 22:53:41.412982   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:41.413034   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.417441   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:41.417495   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:41.460008   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.460037   58701 cri.go:89] found id: ""
	I0410 22:53:41.460048   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:41.460105   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.464422   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:41.464492   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:41.504095   58701 cri.go:89] found id: ""
	I0410 22:53:41.504126   58701 logs.go:276] 0 containers: []
	W0410 22:53:41.504134   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:41.504140   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:41.504199   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:41.543443   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.543467   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.543473   58701 cri.go:89] found id: ""
	I0410 22:53:41.543481   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:41.543540   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.548182   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.552917   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:41.552941   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.601620   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:41.601652   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.653090   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:41.653124   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.692683   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:41.692711   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.736312   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:41.736353   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:41.753242   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:41.753283   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.812881   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:41.812910   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.860686   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:41.860714   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.902523   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:41.902546   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:41.945812   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:41.945848   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:42.001012   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:42.001046   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:42.123971   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:42.124000   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:42.168773   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:42.168806   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:41.405604   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.901172   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.595677   58186 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.084634816s)
	I0410 22:53:43.595765   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:43.613470   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:53:43.624876   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:53:43.638564   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:53:43.638592   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:53:43.638641   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:53:43.652554   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:53:43.652608   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:53:43.664263   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:53:43.674443   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:53:43.674497   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:53:43.695444   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.705446   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:53:43.705518   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.716451   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:53:43.726343   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:53:43.726407   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:53:43.736859   58186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:53:43.957994   58186 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:53:45.115742   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:53:45.120239   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:53:45.121662   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:53:45.121690   58701 api_server.go:131] duration metric: took 3.934815447s to wait for apiserver health ...
	I0410 22:53:45.121699   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:53:45.121727   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:45.121780   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:45.172291   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.172315   58701 cri.go:89] found id: ""
	I0410 22:53:45.172324   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:45.172382   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.177041   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:45.177103   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:45.213853   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:45.213880   58701 cri.go:89] found id: ""
	I0410 22:53:45.213889   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:45.213944   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.218478   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:45.218546   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:45.268753   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.268779   58701 cri.go:89] found id: ""
	I0410 22:53:45.268792   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:45.268843   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.273223   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:45.273291   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:45.314032   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:45.314057   58701 cri.go:89] found id: ""
	I0410 22:53:45.314066   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:45.314115   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.318671   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:45.318740   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:45.356139   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.356167   58701 cri.go:89] found id: ""
	I0410 22:53:45.356177   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:45.356234   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.361449   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:45.361520   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:45.405153   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.405174   58701 cri.go:89] found id: ""
	I0410 22:53:45.405181   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:45.405230   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.409795   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:45.409871   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:45.451984   58701 cri.go:89] found id: ""
	I0410 22:53:45.452016   58701 logs.go:276] 0 containers: []
	W0410 22:53:45.452026   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:45.452034   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:45.452095   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:45.491612   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.491650   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.491656   58701 cri.go:89] found id: ""
	I0410 22:53:45.491665   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:45.491724   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.496253   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.500723   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:45.500751   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.557083   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:45.557118   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.616768   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:45.616804   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.664097   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:45.664133   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.707920   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:45.707957   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:45.751862   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:45.751898   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.806584   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:45.806619   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.846145   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:45.846170   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:45.970766   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:45.970796   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:46.024049   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:46.024081   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:46.067009   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:46.067048   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:46.462765   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:46.462812   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:46.520007   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:46.520049   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:49.047137   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:53:49.047166   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.047170   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.047174   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.047177   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.047180   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.047183   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.047189   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.047192   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.047201   58701 system_pods.go:74] duration metric: took 3.925495812s to wait for pod list to return data ...
	I0410 22:53:49.047208   58701 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:53:49.050341   58701 default_sa.go:45] found service account: "default"
	I0410 22:53:49.050363   58701 default_sa.go:55] duration metric: took 3.148222ms for default service account to be created ...
	I0410 22:53:49.050371   58701 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:53:49.056364   58701 system_pods.go:86] 8 kube-system pods found
	I0410 22:53:49.056390   58701 system_pods.go:89] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.056414   58701 system_pods.go:89] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.056423   58701 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.056431   58701 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.056437   58701 system_pods.go:89] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.056444   58701 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.056455   58701 system_pods.go:89] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.056462   58701 system_pods.go:89] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.056475   58701 system_pods.go:126] duration metric: took 6.097239ms to wait for k8s-apps to be running ...
	I0410 22:53:49.056492   58701 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:53:49.056537   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:49.077239   58701 system_svc.go:56] duration metric: took 20.737127ms WaitForService to wait for kubelet
	I0410 22:53:49.077269   58701 kubeadm.go:576] duration metric: took 4m25.233626302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:53:49.077297   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:53:49.080463   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:53:49.080486   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:53:49.080497   58701 node_conditions.go:105] duration metric: took 3.195662ms to run NodePressure ...
	I0410 22:53:49.080508   58701 start.go:240] waiting for startup goroutines ...
	I0410 22:53:49.080515   58701 start.go:245] waiting for cluster config update ...
	I0410 22:53:49.080525   58701 start.go:254] writing updated cluster config ...
	I0410 22:53:49.080805   58701 ssh_runner.go:195] Run: rm -f paused
	I0410 22:53:49.141489   58701 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:53:49.143597   58701 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-519831" cluster and "default" namespace by default
	I0410 22:53:45.903632   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:48.403981   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.064071   58186 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0410 22:53:53.064154   58186 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:53:53.064260   58186 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:53:53.064429   58186 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:53:53.064574   58186 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:53:53.064670   58186 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:53:53.066595   58186 out.go:204]   - Generating certificates and keys ...
	I0410 22:53:53.066703   58186 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:53:53.066808   58186 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:53:53.066929   58186 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:53:53.067023   58186 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:53:53.067155   58186 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:53:53.067235   58186 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:53:53.067329   58186 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:53:53.067433   58186 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:53:53.067546   58186 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:53:53.067655   58186 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:53:53.067733   58186 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:53:53.067890   58186 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:53:53.067961   58186 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:53:53.068049   58186 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:53:53.068132   58186 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:53:53.068232   58186 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:53:53.068310   58186 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:53:53.068379   58186 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:53:53.068510   58186 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:53:53.070126   58186 out.go:204]   - Booting up control plane ...
	I0410 22:53:53.070219   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:53:53.070324   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:53:53.070425   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:53:53.070565   58186 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:53:53.070686   58186 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:53:53.070748   58186 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:53:53.070973   58186 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:53:53.071083   58186 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002820 seconds
	I0410 22:53:53.071249   58186 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:53:53.071424   58186 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:53:53.071485   58186 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:53:53.071624   58186 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-706500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:53:53.071680   58186 kubeadm.go:309] [bootstrap-token] Using token: 0wvld6.jntz9ft9bn5g46le
	I0410 22:53:53.073567   58186 out.go:204]   - Configuring RBAC rules ...
	I0410 22:53:53.073708   58186 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:53:53.073819   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:53:53.074015   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:53:53.074206   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:53:53.074370   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:53:53.074548   58186 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:53:53.074726   58186 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:53:53.074798   58186 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:53:53.074873   58186 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:53:53.074884   58186 kubeadm.go:309] 
	I0410 22:53:53.074956   58186 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:53:53.074978   58186 kubeadm.go:309] 
	I0410 22:53:53.075077   58186 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:53:53.075088   58186 kubeadm.go:309] 
	I0410 22:53:53.075119   58186 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:53:53.075191   58186 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:53:53.075262   58186 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:53:53.075273   58186 kubeadm.go:309] 
	I0410 22:53:53.075337   58186 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:53:53.075353   58186 kubeadm.go:309] 
	I0410 22:53:53.075419   58186 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:53:53.075437   58186 kubeadm.go:309] 
	I0410 22:53:53.075503   58186 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:53:53.075621   58186 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:53:53.075714   58186 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:53:53.075724   58186 kubeadm.go:309] 
	I0410 22:53:53.075829   58186 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:53:53.075936   58186 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:53:53.075953   58186 kubeadm.go:309] 
	I0410 22:53:53.076058   58186 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076196   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:53:53.076253   58186 kubeadm.go:309] 	--control-plane 
	I0410 22:53:53.076270   58186 kubeadm.go:309] 
	I0410 22:53:53.076387   58186 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:53:53.076422   58186 kubeadm.go:309] 
	I0410 22:53:53.076516   58186 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076661   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:53:53.076711   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:53:53.076726   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:53:53.078503   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:53:50.902397   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.403449   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.079631   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:53:53.132043   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:53:53.167760   58186 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:53:53.167847   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:53.167870   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-706500 minikube.k8s.io/updated_at=2024_04_10T22_53_53_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=embed-certs-706500 minikube.k8s.io/primary=true
	I0410 22:53:53.511359   58186 ops.go:34] apiserver oom_adj: -16
	I0410 22:53:53.511506   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.012080   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.511816   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.011883   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.511809   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.011572   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.512114   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:57.011878   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.900548   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.901541   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.662444   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:57.662687   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:57.511726   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.011563   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.512617   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.012145   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.512448   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.012278   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.512290   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.012507   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.512415   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:02.011660   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.401622   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.902558   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.511581   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.012326   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.512539   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.012085   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.512496   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.011911   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.512180   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.619801   58186 kubeadm.go:1107] duration metric: took 12.452015223s to wait for elevateKubeSystemPrivileges
	W0410 22:54:05.619839   58186 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:54:05.619847   58186 kubeadm.go:393] duration metric: took 5m12.640298551s to StartCluster
	I0410 22:54:05.619862   58186 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.619936   58186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:54:05.621989   58186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.622331   58186 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:54:05.624233   58186 out.go:177] * Verifying Kubernetes components...
	I0410 22:54:05.622444   58186 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:54:05.622516   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:54:05.625850   58186 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-706500"
	I0410 22:54:05.625872   58186 addons.go:69] Setting default-storageclass=true in profile "embed-certs-706500"
	I0410 22:54:05.625882   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:54:05.625893   58186 addons.go:69] Setting metrics-server=true in profile "embed-certs-706500"
	I0410 22:54:05.625924   58186 addons.go:234] Setting addon metrics-server=true in "embed-certs-706500"
	W0410 22:54:05.625930   58186 addons.go:243] addon metrics-server should already be in state true
	I0410 22:54:05.625954   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.625888   58186 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-706500"
	I0410 22:54:05.625903   58186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-706500"
	W0410 22:54:05.625982   58186 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:54:05.626012   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.626365   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626407   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626421   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626440   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626441   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626442   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.643647   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0410 22:54:05.643758   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0410 22:54:05.644070   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0410 22:54:05.644101   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644253   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644856   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644883   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644915   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.645239   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645419   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.645475   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.645489   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645501   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.646021   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.646035   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646062   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.646588   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646619   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.648242   58186 addons.go:234] Setting addon default-storageclass=true in "embed-certs-706500"
	W0410 22:54:05.648261   58186 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:54:05.648282   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.648555   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.648582   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.661773   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0410 22:54:05.662556   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.663049   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.663073   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.663474   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.663708   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.664716   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I0410 22:54:05.665027   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.665617   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.665634   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.665706   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0410 22:54:05.666342   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.666343   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.665946   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.668790   58186 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:54:05.667015   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.667244   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.670336   58186 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:05.670357   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:54:05.670374   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.668826   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.668843   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.671350   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.671633   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.673653   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.675310   58186 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:54:05.674011   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.674533   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.676671   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:54:05.676677   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.676690   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:54:05.676710   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.676713   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.676821   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.676976   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.677117   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.680146   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.680927   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.680964   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.681136   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.681515   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.681681   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.681834   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.688424   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0410 22:54:05.688861   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.689299   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.689320   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.689589   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.689741   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.691090   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.691335   58186 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:05.691353   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:54:05.691369   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.694552   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695080   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.695118   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695426   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.695771   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.695939   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.696084   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.860032   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:54:05.881036   58186 node_ready.go:35] waiting up to 6m0s for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891218   58186 node_ready.go:49] node "embed-certs-706500" has status "Ready":"True"
	I0410 22:54:05.891237   58186 node_ready.go:38] duration metric: took 10.166143ms for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891247   58186 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:05.899013   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:06.064031   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:54:06.064051   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:54:06.065727   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:06.075127   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:06.140574   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:54:06.140607   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:54:06.216389   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:06.216428   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:54:06.356117   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:07.409983   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.334826611s)
	I0410 22:54:07.410039   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410052   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410103   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.344342448s)
	I0410 22:54:07.410184   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410199   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410321   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410362   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410371   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410382   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410452   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410505   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410519   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410531   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410678   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410765   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410802   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410820   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410822   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.438723   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.438742   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.439085   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.439104   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.439085   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.738187   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.382031326s)
	I0410 22:54:07.738252   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738267   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738556   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738586   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738597   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738604   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738865   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738885   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738908   58186 addons.go:470] Verifying addon metrics-server=true in "embed-certs-706500"
	I0410 22:54:07.741639   58186 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
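	For reference, once the "Enabled addons" summary above is printed, the installed objects can be checked by hand on the node, using the same binary and kubeconfig the apply commands above use. This is only an illustrative sketch (the pod and deployment names are taken from the pod listings later in this log, not from a separate source), not part of the recorded run:

	    K="sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl"
	    $K -n kube-system get pod storage-provisioner
	    $K -n kube-system get deploy metrics-server
	    $K get storageclass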
	I0410 22:54:05.403374   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:07.903041   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.895154   57270 pod_ready.go:81] duration metric: took 4m0.000708165s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	E0410 22:54:08.895186   57270 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:54:08.895214   57270 pod_ready.go:38] duration metric: took 4m14.550044852s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:08.895246   57270 kubeadm.go:591] duration metric: took 4m22.444968141s to restartPrimaryControlPlane
	W0410 22:54:08.895308   57270 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:54:08.895339   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:54:07.742954   58186 addons.go:505] duration metric: took 2.120520274s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0410 22:54:07.910203   58186 pod_ready.go:102] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.906369   58186 pod_ready.go:92] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.906394   58186 pod_ready.go:81] duration metric: took 3.007348288s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.906407   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913564   58186 pod_ready.go:92] pod "coredns-76f75df574-v2pp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.913582   58186 pod_ready.go:81] duration metric: took 7.168463ms for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913592   58186 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919270   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.919296   58186 pod_ready.go:81] duration metric: took 5.696297ms for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919308   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924389   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.924430   58186 pod_ready.go:81] duration metric: took 5.111624ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924443   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929296   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.929320   58186 pod_ready.go:81] duration metric: took 4.869073ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929333   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305730   58186 pod_ready.go:92] pod "kube-proxy-xj5nq" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.305756   58186 pod_ready.go:81] duration metric: took 376.415901ms for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305770   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703841   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.703869   58186 pod_ready.go:81] duration metric: took 398.090582ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703881   58186 pod_ready.go:38] duration metric: took 3.812625835s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:09.703898   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:54:09.703957   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:54:09.720728   58186 api_server.go:72] duration metric: took 4.098354983s to wait for apiserver process to appear ...
	I0410 22:54:09.720763   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:54:09.720786   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:54:09.726522   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:54:09.727951   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:54:09.727979   58186 api_server.go:131] duration metric: took 7.20731ms to wait for apiserver health ...
	I0410 22:54:09.727989   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:54:09.908166   58186 system_pods.go:59] 9 kube-system pods found
	I0410 22:54:09.908203   58186 system_pods.go:61] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:09.908212   58186 system_pods.go:61] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:09.908217   58186 system_pods.go:61] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:09.908222   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:09.908227   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:09.908232   58186 system_pods.go:61] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:09.908236   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:09.908246   58186 system_pods.go:61] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:09.908251   58186 system_pods.go:61] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:09.908263   58186 system_pods.go:74] duration metric: took 180.267138ms to wait for pod list to return data ...
	I0410 22:54:09.908276   58186 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:54:10.103556   58186 default_sa.go:45] found service account: "default"
	I0410 22:54:10.103586   58186 default_sa.go:55] duration metric: took 195.301798ms for default service account to be created ...
	I0410 22:54:10.103597   58186 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:54:10.309537   58186 system_pods.go:86] 9 kube-system pods found
	I0410 22:54:10.309566   58186 system_pods.go:89] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:10.309572   58186 system_pods.go:89] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:10.309578   58186 system_pods.go:89] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:10.309583   58186 system_pods.go:89] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:10.309588   58186 system_pods.go:89] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:10.309592   58186 system_pods.go:89] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:10.309596   58186 system_pods.go:89] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:10.309602   58186 system_pods.go:89] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:10.309607   58186 system_pods.go:89] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:10.309617   58186 system_pods.go:126] duration metric: took 206.014442ms to wait for k8s-apps to be running ...
	I0410 22:54:10.309624   58186 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:54:10.309666   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:10.324614   58186 system_svc.go:56] duration metric: took 14.97975ms WaitForService to wait for kubelet
	I0410 22:54:10.324651   58186 kubeadm.go:576] duration metric: took 4.702277594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:54:10.324669   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:54:10.503911   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:54:10.503939   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:54:10.503949   58186 node_conditions.go:105] duration metric: took 179.27538ms to run NodePressure ...
	I0410 22:54:10.503959   58186 start.go:240] waiting for startup goroutines ...
	I0410 22:54:10.503966   58186 start.go:245] waiting for cluster config update ...
	I0410 22:54:10.503975   58186 start.go:254] writing updated cluster config ...
	I0410 22:54:10.504242   58186 ssh_runner.go:195] Run: rm -f paused
	I0410 22:54:10.555500   58186 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:54:10.557941   58186 out.go:177] * Done! kubectl is now configured to use "embed-certs-706500" cluster and "default" namespace by default
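	With the "Done!" message above, the embed-certs-706500 profile is up and the host kubeconfig already points at it, so a minimal smoke test from the test host would simply be (illustrative only, not something the test executed):

	    kubectl get nodes -o wide
	    kubectl -n kube-system get pods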
	I0410 22:54:37.664290   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:54:37.664604   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:54:37.664634   57719 kubeadm.go:309] 
	I0410 22:54:37.664776   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:54:37.664843   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:54:37.664854   57719 kubeadm.go:309] 
	I0410 22:54:37.664901   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:54:37.664968   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:54:37.665086   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:54:37.665101   57719 kubeadm.go:309] 
	I0410 22:54:37.665245   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:54:37.665313   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:54:37.665360   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:54:37.665372   57719 kubeadm.go:309] 
	I0410 22:54:37.665579   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:54:37.665695   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:54:37.665707   57719 kubeadm.go:309] 
	I0410 22:54:37.665868   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:54:37.666063   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:54:37.666192   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:54:37.666272   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:54:37.666284   57719 kubeadm.go:309] 
	I0410 22:54:37.667202   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:37.667329   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:54:37.667420   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0410 22:54:37.667555   57719 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
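	The failing v1.20.0 bootstrap above ends with kubeadm's own triage hints; gathered into one sequence to run on the node (for example over minikube ssh), they amount to:

	    systemctl status kubelet
	    journalctl -xeu kubelet --no-pager | tail -n 100
	    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # for whichever container above is failing

	minikube itself does not stop here: as the next lines show, it resets the node and retries kubeadm init.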
	
	I0410 22:54:37.667623   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:54:40.975782   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.080419546s)
	I0410 22:54:40.975854   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:40.993677   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:54:41.006185   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:41.016820   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:41.016850   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:41.016985   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:41.026802   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:41.026871   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:41.036992   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:41.046896   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:41.046962   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:41.057184   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.067261   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:41.067321   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.077846   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:41.087745   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:41.087795   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
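	The grep/rm pairs above all apply one rule: any kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 is treated as stale and deleted before kubeadm init is re-run. As a shell sketch of that rule (minikube's actual implementation is the Go code in kubeadm.go, not this loop):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done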
	I0410 22:54:41.098660   57270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:41.159736   57270 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.1
	I0410 22:54:41.159807   57270 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:54:41.316137   57270 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:54:41.316279   57270 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:54:41.316446   57270 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:54:41.559720   57270 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:54:41.561946   57270 out.go:204]   - Generating certificates and keys ...
	I0410 22:54:41.562039   57270 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:54:41.562141   57270 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:54:41.562211   57270 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:54:41.562275   57270 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:54:41.562352   57270 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:54:41.562460   57270 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:54:41.562572   57270 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:54:41.562667   57270 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:54:41.562803   57270 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:54:41.562917   57270 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:54:41.562992   57270 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:54:41.563081   57270 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:54:41.723729   57270 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:54:41.834274   57270 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:54:41.936758   57270 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:54:42.038298   57270 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:54:42.229459   57270 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:54:42.230047   57270 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:54:42.233021   57270 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:54:42.236068   57270 out.go:204]   - Booting up control plane ...
	I0410 22:54:42.236197   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:54:42.236303   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:54:42.236421   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:54:42.255487   57270 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:54:42.256345   57270 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:54:42.256450   57270 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:54:42.391623   57270 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0410 22:54:42.391736   57270 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0410 22:54:43.393825   57270 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00265832s
	I0410 22:54:43.393973   57270 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0410 22:54:43.156141   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.488487447s)
	I0410 22:54:43.156227   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:43.170709   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:43.180624   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:43.180647   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:43.180701   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:43.190482   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:43.190533   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:43.200261   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:43.210061   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:43.210116   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:43.220430   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.230810   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:43.230877   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.241141   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:43.251043   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:43.251111   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:54:43.261163   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:43.534002   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:48.398196   57270 kubeadm.go:309] [api-check] The API server is healthy after 5.002218646s
	I0410 22:54:48.410618   57270 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:54:48.430553   57270 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:54:48.465343   57270 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:54:48.465614   57270 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-646133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:54:48.489066   57270 kubeadm.go:309] [bootstrap-token] Using token: 14xwwp.uyth37qsjfn0mpcx
	I0410 22:54:48.490984   57270 out.go:204]   - Configuring RBAC rules ...
	I0410 22:54:48.491116   57270 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:54:48.502789   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:54:48.516871   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:54:48.523600   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:54:48.527939   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:54:48.537216   57270 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:54:48.806350   57270 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:54:49.234618   57270 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:54:49.803640   57270 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:54:49.804948   57270 kubeadm.go:309] 
	I0410 22:54:49.805074   57270 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:54:49.805095   57270 kubeadm.go:309] 
	I0410 22:54:49.805194   57270 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:54:49.805209   57270 kubeadm.go:309] 
	I0410 22:54:49.805240   57270 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:54:49.805323   57270 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:54:49.805403   57270 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:54:49.805415   57270 kubeadm.go:309] 
	I0410 22:54:49.805482   57270 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:54:49.805489   57270 kubeadm.go:309] 
	I0410 22:54:49.805562   57270 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:54:49.805580   57270 kubeadm.go:309] 
	I0410 22:54:49.805646   57270 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:54:49.805781   57270 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:54:49.805888   57270 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:54:49.805901   57270 kubeadm.go:309] 
	I0410 22:54:49.806038   57270 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:54:49.806143   57270 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:54:49.806154   57270 kubeadm.go:309] 
	I0410 22:54:49.806262   57270 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806398   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:54:49.806438   57270 kubeadm.go:309] 	--control-plane 
	I0410 22:54:49.806456   57270 kubeadm.go:309] 
	I0410 22:54:49.806565   57270 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:54:49.806581   57270 kubeadm.go:309] 
	I0410 22:54:49.806661   57270 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806777   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:54:49.808385   57270 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
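	The join commands printed above carry a --discovery-token-ca-cert-hash; that value is the SHA-256 of the cluster CA's public key and can be recomputed on the node from the certificate directory reported earlier ("/var/lib/minikube/certs"), assuming the default RSA CA key kubeadm generates:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | sha256sum | cut -d' ' -f1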
	I0410 22:54:49.808455   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:54:49.808473   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:54:49.811276   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:54:49.812840   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:54:49.829865   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:54:49.854383   57270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:54:49.854454   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:49.854456   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-646133 minikube.k8s.io/updated_at=2024_04_10T22_54_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=no-preload-646133 minikube.k8s.io/primary=true
	I0410 22:54:49.888254   57270 ops.go:34] apiserver oom_adj: -16
	I0410 22:54:50.073922   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:50.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.074134   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.574654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.074970   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.074799   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.574902   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.074695   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.574038   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.074975   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.574297   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.074490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.574490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.074280   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.574569   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.074654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.074630   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.574546   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.075044   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.074961   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.574004   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.074121   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.574476   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.705604   57270 kubeadm.go:1107] duration metric: took 12.851213125s to wait for elevateKubeSystemPrivileges
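	The run of "kubectl get sa default" calls above, spaced roughly half a second apart, is minikube waiting for the default service account to exist so that the minikube-rbac cluster-admin binding created at 22:54:49 is usable; a shell equivalent of that observed wait (a sketch, not the tool's own code) is:

	    until sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done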
	W0410 22:55:02.705636   57270 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:55:02.705644   57270 kubeadm.go:393] duration metric: took 5m16.306442396s to StartCluster
	I0410 22:55:02.705660   57270 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.705739   57270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:55:02.707592   57270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.707844   57270 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:55:02.709479   57270 out.go:177] * Verifying Kubernetes components...
	I0410 22:55:02.707944   57270 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:55:02.708074   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:55:02.710816   57270 addons.go:69] Setting storage-provisioner=true in profile "no-preload-646133"
	I0410 22:55:02.710827   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:55:02.710854   57270 addons.go:234] Setting addon storage-provisioner=true in "no-preload-646133"
	W0410 22:55:02.710865   57270 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:55:02.710889   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.710819   57270 addons.go:69] Setting default-storageclass=true in profile "no-preload-646133"
	I0410 22:55:02.710975   57270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-646133"
	I0410 22:55:02.710821   57270 addons.go:69] Setting metrics-server=true in profile "no-preload-646133"
	I0410 22:55:02.711079   57270 addons.go:234] Setting addon metrics-server=true in "no-preload-646133"
	W0410 22:55:02.711090   57270 addons.go:243] addon metrics-server should already be in state true
	I0410 22:55:02.711119   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.711325   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711349   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711352   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711382   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711486   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711507   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.729696   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0410 22:55:02.730179   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.730725   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.730751   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.731138   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I0410 22:55:02.731161   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0410 22:55:02.731223   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.731532   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731551   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731920   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.731951   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.732083   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732103   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732266   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732290   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732642   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732692   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732892   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.733291   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.733336   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.737245   57270 addons.go:234] Setting addon default-storageclass=true in "no-preload-646133"
	W0410 22:55:02.737274   57270 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:55:02.737304   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.737674   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.737710   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.749656   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0410 22:55:02.750133   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.751030   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.751054   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.751467   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.751642   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.752548   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0410 22:55:02.753119   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.753727   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.753903   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.753918   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.755963   57270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:55:02.754443   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.757499   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0410 22:55:02.757548   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:55:02.757559   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:55:02.757576   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.757684   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.758428   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.758880   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.758893   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.759783   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.760197   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.760224   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.760379   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.762291   57270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:55:02.761210   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.761741   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.763819   57270 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:02.763907   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:55:02.763918   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.763841   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.763963   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.764040   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.764153   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.764239   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.767729   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767758   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.767776   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767730   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.767951   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.768100   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.768223   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.782788   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0410 22:55:02.783161   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.783701   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.783726   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.784081   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.784347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.785932   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.786186   57270 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:02.786200   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:55:02.786217   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.789193   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789526   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.789576   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789837   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.790096   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.790278   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.790431   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.922239   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:55:02.957665   57270 node_ready.go:35] waiting up to 6m0s for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981427   57270 node_ready.go:49] node "no-preload-646133" has status "Ready":"True"
	I0410 22:55:02.981449   57270 node_ready.go:38] duration metric: took 23.75134ms for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981458   57270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:02.986557   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:03.024992   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:03.032744   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:03.156968   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:55:03.156989   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:55:03.237497   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:55:03.237522   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:55:03.274982   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.275005   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:55:03.317464   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.512107   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512130   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512173   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512198   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512435   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512455   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512525   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512530   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512541   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512542   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512538   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512551   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512558   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512497   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512782   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512799   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512876   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512915   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512878   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.525688   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.525707   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.526017   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.526042   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.526057   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.905597   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.905627   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906016   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906081   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906089   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906101   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.906107   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906353   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906355   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906381   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906392   57270 addons.go:470] Verifying addon metrics-server=true in "no-preload-646133"
	I0410 22:55:03.908467   57270 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0410 22:55:03.910238   57270 addons.go:505] duration metric: took 1.20230017s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0410 22:55:05.035855   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"False"
	I0410 22:55:05.493330   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.493354   57270 pod_ready.go:81] duration metric: took 2.506773848s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.493365   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498568   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.498593   57270 pod_ready.go:81] duration metric: took 5.220548ms for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498604   57270 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505133   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.505156   57270 pod_ready.go:81] duration metric: took 6.544104ms for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505165   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510391   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.510415   57270 pod_ready.go:81] duration metric: took 5.2417ms for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510427   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524717   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.524737   57270 pod_ready.go:81] duration metric: took 14.302445ms for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524747   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891005   57270 pod_ready.go:92] pod "kube-proxy-24vhc" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.891029   57270 pod_ready.go:81] duration metric: took 366.275947ms for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891039   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291050   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:06.291075   57270 pod_ready.go:81] duration metric: took 400.028808ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291084   57270 pod_ready.go:38] duration metric: took 3.309617471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:06.291101   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:55:06.291165   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:55:06.308433   57270 api_server.go:72] duration metric: took 3.600549626s to wait for apiserver process to appear ...
	I0410 22:55:06.308461   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:55:06.308479   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:55:06.312630   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:55:06.313434   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:55:06.313457   57270 api_server.go:131] duration metric: took 4.989017ms to wait for apiserver health ...
	I0410 22:55:06.313466   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:55:06.494780   57270 system_pods.go:59] 9 kube-system pods found
	I0410 22:55:06.494813   57270 system_pods.go:61] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.494820   57270 system_pods.go:61] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.494826   57270 system_pods.go:61] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.494833   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.494838   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.494843   57270 system_pods.go:61] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.494848   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.494856   57270 system_pods.go:61] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.494862   57270 system_pods.go:61] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.494871   57270 system_pods.go:74] duration metric: took 181.399385ms to wait for pod list to return data ...
	I0410 22:55:06.494890   57270 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:55:06.690158   57270 default_sa.go:45] found service account: "default"
	I0410 22:55:06.690185   57270 default_sa.go:55] duration metric: took 195.289153ms for default service account to be created ...
	I0410 22:55:06.690194   57270 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:55:06.893604   57270 system_pods.go:86] 9 kube-system pods found
	I0410 22:55:06.893632   57270 system_pods.go:89] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.893638   57270 system_pods.go:89] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.893642   57270 system_pods.go:89] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.893646   57270 system_pods.go:89] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.893651   57270 system_pods.go:89] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.893656   57270 system_pods.go:89] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.893659   57270 system_pods.go:89] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.893665   57270 system_pods.go:89] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.893670   57270 system_pods.go:89] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.893679   57270 system_pods.go:126] duration metric: took 203.480657ms to wait for k8s-apps to be running ...
	I0410 22:55:06.893686   57270 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:55:06.893730   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:55:06.909072   57270 system_svc.go:56] duration metric: took 15.374403ms WaitForService to wait for kubelet
	I0410 22:55:06.909096   57270 kubeadm.go:576] duration metric: took 4.20122533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:55:06.909115   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:55:07.090651   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:55:07.090673   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:55:07.090682   57270 node_conditions.go:105] duration metric: took 181.563241ms to run NodePressure ...
	I0410 22:55:07.090692   57270 start.go:240] waiting for startup goroutines ...
	I0410 22:55:07.090698   57270 start.go:245] waiting for cluster config update ...
	I0410 22:55:07.090707   57270 start.go:254] writing updated cluster config ...
	I0410 22:55:07.090957   57270 ssh_runner.go:195] Run: rm -f paused
	I0410 22:55:07.140644   57270 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.1 (minor skew: 1)
	I0410 22:55:07.142770   57270 out.go:177] * Done! kubectl is now configured to use "no-preload-646133" cluster and "default" namespace by default
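	Note: the no-preload profile came up cleanly; the metrics-server pod stays Pending only because the addon was pointed at the test image fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), which cannot be pulled. A minimal way to cross-check the enabled addons and the pending pod outside this log, assuming the profile/context name no-preload-646133 from this run, would be:
	
	  minikube addons list -p no-preload-646133
	  kubectl --context no-preload-646133 -n kube-system get pods -o wide
	  kubectl --context no-preload-646133 -n kube-system describe pod -l k8s-app=metrics-server
	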
	I0410 22:56:40.435994   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:56:40.436123   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0410 22:56:40.437810   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:56:40.437872   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:56:40.437967   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:56:40.438082   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:56:40.438235   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:56:40.438321   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:56:40.440009   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:56:40.440110   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:56:40.440210   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:56:40.440336   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:56:40.440417   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:56:40.440501   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:56:40.440563   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:56:40.440622   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:56:40.440685   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:56:40.440752   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:56:40.440858   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:56:40.440923   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:56:40.441004   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:56:40.441076   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:56:40.441131   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:56:40.441185   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:56:40.441242   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:56:40.441375   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:56:40.441501   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:56:40.441565   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:56:40.441658   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:56:40.443122   57719 out.go:204]   - Booting up control plane ...
	I0410 22:56:40.443230   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:56:40.443332   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:56:40.443431   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:56:40.443549   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:56:40.443710   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:56:40.443783   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:56:40.443883   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444111   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444200   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444429   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444520   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444761   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444869   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445124   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445235   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445416   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445423   57719 kubeadm.go:309] 
	I0410 22:56:40.445465   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:56:40.445512   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:56:40.445520   57719 kubeadm.go:309] 
	I0410 22:56:40.445548   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:56:40.445595   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:56:40.445712   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:56:40.445722   57719 kubeadm.go:309] 
	I0410 22:56:40.445880   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:56:40.445931   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:56:40.445967   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:56:40.445972   57719 kubeadm.go:309] 
	I0410 22:56:40.446095   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:56:40.446190   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:56:40.446201   57719 kubeadm.go:309] 
	I0410 22:56:40.446326   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:56:40.446452   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:56:40.446548   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:56:40.446611   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:56:40.446659   57719 kubeadm.go:309] 
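	Note: kubeadm reports that the kubelet never became healthy, so the troubleshooting sequence it prints above is the natural next step. On a minikube KVM node the same checks can be run over SSH; a sketch, assuming the old-k8s-version-862528 profile name that appears in the CRI-O log further down:
	
	  minikube ssh -p old-k8s-version-862528 "sudo systemctl status kubelet"
	  minikube ssh -p old-k8s-version-862528 "sudo journalctl -xeu kubelet | tail -n 100"
	  minikube ssh -p old-k8s-version-862528 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	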
	I0410 22:56:40.446681   57719 kubeadm.go:393] duration metric: took 8m5.163157284s to StartCluster
	I0410 22:56:40.446805   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:56:40.446880   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:56:40.499163   57719 cri.go:89] found id: ""
	I0410 22:56:40.499196   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.499205   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:56:40.499212   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:56:40.499292   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:56:40.545429   57719 cri.go:89] found id: ""
	I0410 22:56:40.545465   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.545473   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:56:40.545479   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:56:40.545538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:56:40.583842   57719 cri.go:89] found id: ""
	I0410 22:56:40.583870   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.583880   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:56:40.583887   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:56:40.583957   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:56:40.621054   57719 cri.go:89] found id: ""
	I0410 22:56:40.621075   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.621083   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:56:40.621091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:56:40.621149   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:56:40.665133   57719 cri.go:89] found id: ""
	I0410 22:56:40.665161   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.665168   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:56:40.665175   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:56:40.665231   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:56:40.707490   57719 cri.go:89] found id: ""
	I0410 22:56:40.707519   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.707529   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:56:40.707536   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:56:40.707598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:56:40.748539   57719 cri.go:89] found id: ""
	I0410 22:56:40.748565   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.748576   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:56:40.748584   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:56:40.748644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:56:40.792326   57719 cri.go:89] found id: ""
	I0410 22:56:40.792349   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.792358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:56:40.792366   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:56:40.792376   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:56:40.844309   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:56:40.844346   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:56:40.859678   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:56:40.859715   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:56:40.950099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:56:40.950123   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:56:40.950141   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:56:41.073547   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:56:41.073589   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0410 22:56:41.124970   57719 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0410 22:56:41.125024   57719 out.go:239] * 
	W0410 22:56:41.125096   57719 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.125129   57719 out.go:239] * 
	W0410 22:56:41.126153   57719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:56:41.129869   57719 out.go:177] 
	W0410 22:56:41.131207   57719 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.131286   57719 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0410 22:56:41.131326   57719 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0410 22:56:41.133049   57719 out.go:177] 
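	Note: the run exits with K8S_KUBELET_NOT_RUNNING, and the log itself suggests retrying with an explicit kubelet cgroup driver. A hedged retry for this profile, assuming the same kvm2/crio flags used elsewhere in this report and the v1.20.0 version shown above, would look like:
	
	  minikube start -p old-k8s-version-862528 --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	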
	
	
	==> CRI-O <==
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.346576940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790346346548874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a764531-37d3-4b5e-88d9-2a37bb633d2d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.347554095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c762ce2-2037-451f-85fa-7800cd690a65 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.347612828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c762ce2-2037-451f-85fa-7800cd690a65 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.347656892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6c762ce2-2037-451f-85fa-7800cd690a65 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.382308686Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=093d5027-b75e-4e01-9eda-dc92123c32af name=/runtime.v1.RuntimeService/Version
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.382406324Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=093d5027-b75e-4e01-9eda-dc92123c32af name=/runtime.v1.RuntimeService/Version
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.384406830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98d70bbb-9854-461c-a364-97b7f03154ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.384994482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790346384951434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98d70bbb-9854-461c-a364-97b7f03154ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.385711789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e4285a9-3324-43a1-913a-c4f49a4523f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.385787445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e4285a9-3324-43a1-913a-c4f49a4523f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.385834548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6e4285a9-3324-43a1-913a-c4f49a4523f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.420018557Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92eca173-4e5f-4fa6-a7a8-05cf4302a22f name=/runtime.v1.RuntimeService/Version
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.420108078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92eca173-4e5f-4fa6-a7a8-05cf4302a22f name=/runtime.v1.RuntimeService/Version
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.421955167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53a4e29e-a52b-40ef-8173-0809b21162d7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.422405753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790346422380468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53a4e29e-a52b-40ef-8173-0809b21162d7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.423076042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5af861a4-9178-4e2d-a2d5-4d059dabf1db name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.423268936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5af861a4-9178-4e2d-a2d5-4d059dabf1db name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.423352923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5af861a4-9178-4e2d-a2d5-4d059dabf1db name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.458693095Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3549d9f8-b65e-47b4-9837-0855346aa0de name=/runtime.v1.RuntimeService/Version
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.458817169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3549d9f8-b65e-47b4-9837-0855346aa0de name=/runtime.v1.RuntimeService/Version
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.460371321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d179a898-b276-4850-90cb-aeb57222c337 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.460785337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790346460757666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d179a898-b276-4850-90cb-aeb57222c337 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.461416426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de38b59f-f4a5-48cd-8639-43556048a0b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.461469861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de38b59f-f4a5-48cd-8639-43556048a0b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:05:46 old-k8s-version-862528 crio[650]: time="2024-04-10 23:05:46.461505743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=de38b59f-f4a5-48cd-8639-43556048a0b8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr10 22:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052439] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041651] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.553485] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.712541] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.654645] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.367023] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.061213] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068973] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.198082] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.121287] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.251878] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.515656] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.064093] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.589961] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[ +11.062720] kauditd_printk_skb: 46 callbacks suppressed
	[Apr10 22:52] systemd-fstab-generator[4966]: Ignoring "noauto" option for root device
	[Apr10 22:54] systemd-fstab-generator[5254]: Ignoring "noauto" option for root device
	[  +0.070219] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:05:46 up 17 min,  0 users,  load average: 0.05, 0.03, 0.03
	Linux old-k8s-version-862528 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000ad3230, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]: net.cgoIPLookup(0xc000193f80, 0x48ab5d6, 0x3, 0xc000ad3230, 0x1f)
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]: created by net.cgoLookupIP
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]: goroutine 148 [select]:
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000204a50, 0xc000af5901, 0xc000b3b180, 0xc000b4c5e0, 0xc000b7c440, 0xc000b7c400)
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000af5980, 0x0, 0x0)
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000863180)
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 10 23:05:42 old-k8s-version-862528 kubelet[6425]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Apr 10 23:05:42 old-k8s-version-862528 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 10 23:05:42 old-k8s-version-862528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 10 23:05:43 old-k8s-version-862528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 10 23:05:43 old-k8s-version-862528 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 10 23:05:43 old-k8s-version-862528 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 10 23:05:43 old-k8s-version-862528 kubelet[6435]: I0410 23:05:43.321602    6435 server.go:416] Version: v1.20.0
	Apr 10 23:05:43 old-k8s-version-862528 kubelet[6435]: I0410 23:05:43.321953    6435 server.go:837] Client rotation is on, will bootstrap in background
	Apr 10 23:05:43 old-k8s-version-862528 kubelet[6435]: I0410 23:05:43.326485    6435 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 10 23:05:43 old-k8s-version-862528 kubelet[6435]: W0410 23:05:43.328517    6435 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 10 23:05:43 old-k8s-version-862528 kubelet[6435]: I0410 23:05:43.329226    6435 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862528 -n old-k8s-version-862528
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 2 (245.822472ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-862528" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.42s)
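Note on the failure above: kubeadm's wait-control-plane phase timed out because the kubelet on old-k8s-version-862528 never became healthy; the kubelet section shows it crash-looping (systemd restart counter at 114, with the "Cannot detect current cgroup on cgroup v2" warning), so the healthz endpoint on 127.0.0.1:10248 stays unreachable and no control-plane containers are ever created. A minimal sketch of the checks the log itself suggests, run on the node (for example via "minikube ssh -p old-k8s-version-862528"); only the profile name is specific to this run, the rest is the generic kubeadm/minikube advice quoted above:

	# kubelet health endpoint that kubeadm probes
	curl -sSL http://localhost:10248/healthz
	# kubelet service state and recent logs
	systemctl status kubelet
	journalctl -xeu kubelet
	# any control-plane containers created under CRI-O
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# suggestion from the minikube output above; whether it helps on a cgroup v2 guest is not verified here
	minikube start -p old-k8s-version-862528 --extra-config=kubelet.cgroup-driver=systemd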

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (544.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-10 23:11:54.066185479 +0000 UTC m=+6236.302614760
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
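For reference, the condition this test polls for can be checked by hand with a plain label query; a sketch only (the context name, namespace, and label are taken from the log above):

	kubectl --context default-k8s-diff-port-519831 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

In this run it would presumably fail the same way, since the kubernetes-dashboard namespace is reported as not found a few lines below.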
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-519831 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0410 23:11:54.111935   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-519831 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (76.274305ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-519831 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
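The image assertion can be reproduced the same way; a hedged sketch (the deployment name and expected image string come from the log, and the jsonpath is just one way to print the pod template's images):

	kubectl --context default-k8s-diff-port-519831 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the test expects this output to contain registry.k8s.io/echoserver:1.4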
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-519831 logs -n 25
E0410 23:11:54.866308   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-519831 logs -n 25: (3.015232228s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | systemctl status kubelet --all                       |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | systemctl cat kubelet                                |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo cat                           | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo cat                           | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | systemctl cat docker                                 |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo cat                           | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo docker                        | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC |                     |
	|         | system info                                          |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | systemctl cat cri-docker                             |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo cat                           | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo cat                           | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | cri-dockerd --version                                |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC |                     |
	|         | systemctl status containerd                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | systemctl cat containerd                             |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo cat                           | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo cat                           | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | containerd config dump                               |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | systemctl status crio --all                          |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo                               | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo find                          | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |                |                     |                     |
	| ssh     | -p kindnet-688825 sudo crio                          | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | config                                               |                           |         |                |                     |                     |
	| delete  | -p kindnet-688825                                    | kindnet-688825            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	| start   | -p enable-default-cni-688825                         | enable-default-cni-688825 | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC |                     |
	|         | --memory=3072                                        |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |                |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |                |                     |                     |
	|         | --driver=kvm2                                        |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 23:11:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 23:11:38.189919   69967 out.go:291] Setting OutFile to fd 1 ...
	I0410 23:11:38.190038   69967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 23:11:38.190052   69967 out.go:304] Setting ErrFile to fd 2...
	I0410 23:11:38.190057   69967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 23:11:38.190239   69967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 23:11:38.190808   69967 out.go:298] Setting JSON to false
	I0410 23:11:38.191908   69967 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6841,"bootTime":1712783858,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 23:11:38.191967   69967 start.go:139] virtualization: kvm guest
	I0410 23:11:38.194039   69967 out.go:177] * [enable-default-cni-688825] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 23:11:38.195421   69967 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 23:11:38.195384   69967 notify.go:220] Checking for updates...
	I0410 23:11:38.196829   69967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 23:11:38.198526   69967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 23:11:38.200120   69967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 23:11:38.201580   69967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 23:11:38.202874   69967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 23:11:38.204983   69967 config.go:182] Loaded profile config "calico-688825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:11:38.205128   69967 config.go:182] Loaded profile config "custom-flannel-688825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:11:38.205260   69967 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:11:38.205401   69967 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 23:11:38.244565   69967 out.go:177] * Using the kvm2 driver based on user configuration
	I0410 23:11:38.246061   69967 start.go:297] selected driver: kvm2
	I0410 23:11:38.246079   69967 start.go:901] validating driver "kvm2" against <nil>
	I0410 23:11:38.246091   69967 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 23:11:38.247072   69967 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 23:11:38.247183   69967 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 23:11:38.263884   69967 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 23:11:38.263951   69967 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0410 23:11:38.264139   69967 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0410 23:11:38.264187   69967 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 23:11:38.264244   69967 cni.go:84] Creating CNI manager for "bridge"
	I0410 23:11:38.264254   69967 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0410 23:11:38.264309   69967 start.go:340] cluster config:
	{Name:enable-default-cni-688825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 23:11:38.264431   69967 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 23:11:38.266256   69967 out.go:177] * Starting "enable-default-cni-688825" primary control-plane node in "enable-default-cni-688825" cluster
	I0410 23:11:36.004423   67722 node_ready.go:53] node "calico-688825" has status "Ready":"False"
	I0410 23:11:37.005859   67722 node_ready.go:49] node "calico-688825" has status "Ready":"True"
	I0410 23:11:37.005890   67722 node_ready.go:38] duration metric: took 10.505671758s for node "calico-688825" to be "Ready" ...
	I0410 23:11:37.005900   67722 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 23:11:37.018041   67722 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-787f445f84-k5p6d" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:39.024876   67722 pod_ready.go:102] pod "calico-kube-controllers-787f445f84-k5p6d" in "kube-system" namespace has status "Ready":"False"
	I0410 23:11:37.954747   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:37.955265   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | unable to find current IP address of domain custom-flannel-688825 in network mk-custom-flannel-688825
	I0410 23:11:37.955288   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | I0410 23:11:37.955223   68441 retry.go:31] will retry after 4.764360458s: waiting for machine to come up
	I0410 23:11:38.267550   69967 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 23:11:38.267587   69967 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 23:11:38.267612   69967 cache.go:56] Caching tarball of preloaded images
	I0410 23:11:38.267704   69967 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 23:11:38.267717   69967 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 23:11:38.267819   69967 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/enable-default-cni-688825/config.json ...
	I0410 23:11:38.267841   69967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/enable-default-cni-688825/config.json: {Name:mk8fee91d004812b64222bab47dd57655efa221f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:11:38.267986   69967 start.go:360] acquireMachinesLock for enable-default-cni-688825: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 23:11:41.025271   67722 pod_ready.go:102] pod "calico-kube-controllers-787f445f84-k5p6d" in "kube-system" namespace has status "Ready":"False"
	I0410 23:11:43.965646   67722 pod_ready.go:102] pod "calico-kube-controllers-787f445f84-k5p6d" in "kube-system" namespace has status "Ready":"False"
	I0410 23:11:47.395133   69967 start.go:364] duration metric: took 9.127119686s to acquireMachinesLock for "enable-default-cni-688825"
	I0410 23:11:47.395223   69967 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-688825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 23:11:47.395358   69967 start.go:125] createHost starting for "" (driver="kvm2")
	I0410 23:11:42.721427   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:42.721986   68418 main.go:141] libmachine: (custom-flannel-688825) Found IP for machine: 192.168.39.9
	I0410 23:11:42.722016   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has current primary IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:42.722029   68418 main.go:141] libmachine: (custom-flannel-688825) Reserving static IP address...
	I0410 23:11:42.722363   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | unable to find host DHCP lease matching {name: "custom-flannel-688825", mac: "52:54:00:a0:03:8c", ip: "192.168.39.9"} in network mk-custom-flannel-688825
	I0410 23:11:42.810901   68418 main.go:141] libmachine: (custom-flannel-688825) Reserved static IP address: 192.168.39.9
	I0410 23:11:42.810929   68418 main.go:141] libmachine: (custom-flannel-688825) Waiting for SSH to be available...
	I0410 23:11:42.810945   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | Getting to WaitForSSH function...
	I0410 23:11:42.813745   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:42.814054   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825
	I0410 23:11:42.814095   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | unable to find defined IP address of network mk-custom-flannel-688825 interface with MAC address 52:54:00:a0:03:8c
	I0410 23:11:42.814253   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | Using SSH client type: external
	I0410 23:11:42.814282   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/custom-flannel-688825/id_rsa (-rw-------)
	I0410 23:11:42.814328   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/custom-flannel-688825/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 23:11:42.814355   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | About to run SSH command:
	I0410 23:11:42.814370   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | exit 0
	I0410 23:11:42.819572   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | SSH cmd err, output: exit status 255: 
	I0410 23:11:42.819600   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0410 23:11:42.819612   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | command : exit 0
	I0410 23:11:42.819621   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | err     : exit status 255
	I0410 23:11:42.819638   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | output  : 
	I0410 23:11:45.821758   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | Getting to WaitForSSH function...
	I0410 23:11:45.824418   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:45.825004   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:45.825047   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:45.825129   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | Using SSH client type: external
	I0410 23:11:45.825156   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/custom-flannel-688825/id_rsa (-rw-------)
	I0410 23:11:45.825195   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/custom-flannel-688825/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 23:11:45.825209   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | About to run SSH command:
	I0410 23:11:45.825221   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | exit 0
	I0410 23:11:45.952567   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | SSH cmd err, output: <nil>: 
	I0410 23:11:45.952863   68418 main.go:141] libmachine: (custom-flannel-688825) KVM machine creation complete!
	I0410 23:11:45.953229   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetConfigRaw
	I0410 23:11:45.953761   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .DriverName
	I0410 23:11:45.953990   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .DriverName
	I0410 23:11:45.954193   68418 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0410 23:11:45.954216   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetState
	I0410 23:11:45.955537   68418 main.go:141] libmachine: Detecting operating system of created instance...
	I0410 23:11:45.955550   68418 main.go:141] libmachine: Waiting for SSH to be available...
	I0410 23:11:45.955568   68418 main.go:141] libmachine: Getting to WaitForSSH function...
	I0410 23:11:45.955575   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:45.958020   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:45.958334   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:45.958361   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:45.958521   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHPort
	I0410 23:11:45.958677   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:45.958792   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:45.958984   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHUsername
	I0410 23:11:45.959145   68418 main.go:141] libmachine: Using SSH client type: native
	I0410 23:11:45.959379   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0410 23:11:45.959393   68418 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0410 23:11:46.064056   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 23:11:46.064082   68418 main.go:141] libmachine: Detecting the provisioner...
	I0410 23:11:46.064089   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:46.066946   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.067349   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:46.067372   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.067562   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHPort
	I0410 23:11:46.067763   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:46.067957   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:46.068120   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHUsername
	I0410 23:11:46.068274   68418 main.go:141] libmachine: Using SSH client type: native
	I0410 23:11:46.068501   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0410 23:11:46.068512   68418 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0410 23:11:46.177498   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0410 23:11:46.177555   68418 main.go:141] libmachine: found compatible host: buildroot
	I0410 23:11:46.177564   68418 main.go:141] libmachine: Provisioning with buildroot...
	I0410 23:11:46.177573   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetMachineName
	I0410 23:11:46.177788   68418 buildroot.go:166] provisioning hostname "custom-flannel-688825"
	I0410 23:11:46.177811   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetMachineName
	I0410 23:11:46.178000   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:46.180619   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.180994   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:46.181034   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.181167   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHPort
	I0410 23:11:46.181363   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:46.181528   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:46.181690   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHUsername
	I0410 23:11:46.181899   68418 main.go:141] libmachine: Using SSH client type: native
	I0410 23:11:46.182120   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0410 23:11:46.182135   68418 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-688825 && echo "custom-flannel-688825" | sudo tee /etc/hostname
	I0410 23:11:46.313435   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-688825
	
	I0410 23:11:46.313469   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:46.316590   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.316973   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:46.317014   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.317176   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHPort
	I0410 23:11:46.317358   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:46.317565   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:46.317722   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHUsername
	I0410 23:11:46.317907   68418 main.go:141] libmachine: Using SSH client type: native
	I0410 23:11:46.318071   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0410 23:11:46.318087   68418 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-688825' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-688825/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-688825' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 23:11:46.442359   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 23:11:46.442395   68418 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 23:11:46.442421   68418 buildroot.go:174] setting up certificates
	I0410 23:11:46.442434   68418 provision.go:84] configureAuth start
	I0410 23:11:46.442449   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetMachineName
	I0410 23:11:46.442731   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetIP
	I0410 23:11:46.445548   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.445868   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:46.445890   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.446087   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:46.448435   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.448776   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:46.448804   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.448903   68418 provision.go:143] copyHostCerts
	I0410 23:11:46.448967   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 23:11:46.448984   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 23:11:46.449052   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 23:11:46.449173   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 23:11:46.449188   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 23:11:46.449254   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 23:11:46.449350   68418 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 23:11:46.449359   68418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 23:11:46.449388   68418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 23:11:46.449452   68418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-688825 san=[127.0.0.1 192.168.39.9 custom-flannel-688825 localhost minikube]
	I0410 23:11:46.672120   68418 provision.go:177] copyRemoteCerts
	I0410 23:11:46.672192   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 23:11:46.672215   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:46.675190   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.675626   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:46.675656   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.675873   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHPort
	I0410 23:11:46.676094   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:46.676269   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHUsername
	I0410 23:11:46.676468   68418 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/custom-flannel-688825/id_rsa Username:docker}
	I0410 23:11:46.762975   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 23:11:46.788448   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 23:11:46.813756   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
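(Aside: the certificates copied above can be sanity-checked on the guest with stock openssl; a sketch, assuming openssl is present in the buildroot image.)

    # Inspect the provisioned server certificate and check it against the CA (sketch).
    # The SAN list should match the names logged by provision.go above
    # (127.0.0.1 192.168.39.9 custom-flannel-688825 localhost minikube).
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem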
	I0410 23:11:46.842142   68418 provision.go:87] duration metric: took 399.694881ms to configureAuth
	I0410 23:11:46.842166   68418 buildroot.go:189] setting minikube options for container-runtime
	I0410 23:11:46.842357   68418 config.go:182] Loaded profile config "custom-flannel-688825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:11:46.842453   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:46.845092   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.845426   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:46.845467   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:46.845601   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHPort
	I0410 23:11:46.845823   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:46.846016   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:46.846132   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHUsername
	I0410 23:11:46.846307   68418 main.go:141] libmachine: Using SSH client type: native
	I0410 23:11:46.846482   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0410 23:11:46.846500   68418 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 23:11:47.140537   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
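(Aside: a sketch of what this step leaves on the guest; that the crio systemd unit consumes /etc/sysconfig/crio.minikube through an EnvironmentFile= directive is an assumption on our part, not something the log states.)

    # The options file written by the tee above, and where it is presumably wired in (sketch).
    cat /etc/sysconfig/crio.minikube
    #   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i environmentfile   # assumption: the unit references the sysconfig file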
	
	I0410 23:11:47.140572   68418 main.go:141] libmachine: Checking connection to Docker...
	I0410 23:11:47.140583   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetURL
	I0410 23:11:47.141942   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | Using libvirt version 6000000
	I0410 23:11:47.143964   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.144467   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:47.144545   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.144679   68418 main.go:141] libmachine: Docker is up and running!
	I0410 23:11:47.144696   68418 main.go:141] libmachine: Reticulating splines...
	I0410 23:11:47.144707   68418 client.go:171] duration metric: took 29.609198188s to LocalClient.Create
	I0410 23:11:47.144729   68418 start.go:167] duration metric: took 29.609253602s to libmachine.API.Create "custom-flannel-688825"
	I0410 23:11:47.144742   68418 start.go:293] postStartSetup for "custom-flannel-688825" (driver="kvm2")
	I0410 23:11:47.144759   68418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 23:11:47.144783   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .DriverName
	I0410 23:11:47.145008   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 23:11:47.145030   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:47.147218   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.147607   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:47.147627   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.147814   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHPort
	I0410 23:11:47.147992   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:47.148147   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHUsername
	I0410 23:11:47.148299   68418 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/custom-flannel-688825/id_rsa Username:docker}
	I0410 23:11:47.231668   68418 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 23:11:47.236017   68418 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 23:11:47.236042   68418 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 23:11:47.236124   68418 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 23:11:47.236229   68418 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 23:11:47.236340   68418 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 23:11:47.246844   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 23:11:47.274146   68418 start.go:296] duration metric: took 129.386036ms for postStartSetup
	I0410 23:11:47.274199   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetConfigRaw
	I0410 23:11:47.274879   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetIP
	I0410 23:11:47.277759   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.278151   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:47.278196   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.278395   68418 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/custom-flannel-688825/config.json ...
	I0410 23:11:47.278564   68418 start.go:128] duration metric: took 29.763085762s to createHost
	I0410 23:11:47.278585   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:47.281060   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.281452   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:47.281490   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.281722   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHPort
	I0410 23:11:47.281919   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:47.282098   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:47.282246   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHUsername
	I0410 23:11:47.282409   68418 main.go:141] libmachine: Using SSH client type: native
	I0410 23:11:47.282623   68418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0410 23:11:47.282640   68418 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 23:11:47.394976   68418 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712790707.383575640
	
	I0410 23:11:47.395001   68418 fix.go:216] guest clock: 1712790707.383575640
	I0410 23:11:47.395011   68418 fix.go:229] Guest: 2024-04-10 23:11:47.38357564 +0000 UTC Remote: 2024-04-10 23:11:47.278576153 +0000 UTC m=+29.893891865 (delta=104.999487ms)
	I0410 23:11:47.395035   68418 fix.go:200] guest clock delta is within tolerance: 104.999487ms
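(Aside: the delta above is simply host wall clock minus guest wall clock captured at the same instant; a hand-rolled version of the same check, as a sketch using the SSH key path from the log.)

    # Reproduce the guest-clock comparison by hand (sketch).
    KEY=/home/jenkins/minikube-integration/18610-5679/.minikube/machines/custom-flannel-688825/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.39.9 'date +%s.%N')
    host=$(date +%s.%N)
    echo "delta: $(echo "$host - $guest" | bc)s"   # should stay small, as in the ~105ms reported above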
	I0410 23:11:47.395041   68418 start.go:83] releasing machines lock for "custom-flannel-688825", held for 29.879668172s
	I0410 23:11:47.395071   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .DriverName
	I0410 23:11:47.395418   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetIP
	I0410 23:11:47.398197   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.398598   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:47.398622   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.398795   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .DriverName
	I0410 23:11:47.399522   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .DriverName
	I0410 23:11:47.399755   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .DriverName
	I0410 23:11:47.399869   68418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 23:11:47.399938   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:47.399961   68418 ssh_runner.go:195] Run: cat /version.json
	I0410 23:11:47.399989   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHHostname
	I0410 23:11:47.402933   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.403228   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.403369   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:47.403390   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.403553   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHPort
	I0410 23:11:47.403715   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:47.403734   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:47.403790   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:47.404003   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHUsername
	I0410 23:11:47.404006   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHPort
	I0410 23:11:47.404231   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHKeyPath
	I0410 23:11:47.404235   68418 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/custom-flannel-688825/id_rsa Username:docker}
	I0410 23:11:47.404418   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetSSHUsername
	I0410 23:11:47.404561   68418 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/custom-flannel-688825/id_rsa Username:docker}
	I0410 23:11:47.397370   69967 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0410 23:11:47.397553   69967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 23:11:47.397605   69967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 23:11:47.417455   69967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0410 23:11:47.418036   69967 main.go:141] libmachine: () Calling .GetVersion
	I0410 23:11:47.418737   69967 main.go:141] libmachine: Using API Version  1
	I0410 23:11:47.418760   69967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 23:11:47.419061   69967 main.go:141] libmachine: () Calling .GetMachineName
	I0410 23:11:47.419253   69967 main.go:141] libmachine: (enable-default-cni-688825) Calling .GetMachineName
	I0410 23:11:47.419414   69967 main.go:141] libmachine: (enable-default-cni-688825) Calling .DriverName
	I0410 23:11:47.419560   69967 start.go:159] libmachine.API.Create for "enable-default-cni-688825" (driver="kvm2")
	I0410 23:11:47.419587   69967 client.go:168] LocalClient.Create starting
	I0410 23:11:47.419620   69967 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem
	I0410 23:11:47.419668   69967 main.go:141] libmachine: Decoding PEM data...
	I0410 23:11:47.419690   69967 main.go:141] libmachine: Parsing certificate...
	I0410 23:11:47.419753   69967 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem
	I0410 23:11:47.419785   69967 main.go:141] libmachine: Decoding PEM data...
	I0410 23:11:47.419801   69967 main.go:141] libmachine: Parsing certificate...
	I0410 23:11:47.419834   69967 main.go:141] libmachine: Running pre-create checks...
	I0410 23:11:47.419846   69967 main.go:141] libmachine: (enable-default-cni-688825) Calling .PreCreateCheck
	I0410 23:11:47.420173   69967 main.go:141] libmachine: (enable-default-cni-688825) Calling .GetConfigRaw
	I0410 23:11:47.420682   69967 main.go:141] libmachine: Creating machine...
	I0410 23:11:47.420701   69967 main.go:141] libmachine: (enable-default-cni-688825) Calling .Create
	I0410 23:11:47.420841   69967 main.go:141] libmachine: (enable-default-cni-688825) Creating KVM machine...
	I0410 23:11:47.422235   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | found existing default KVM network
	I0410 23:11:47.423682   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:47.423512   70049 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:53:5a:17} reservation:<nil>}
	I0410 23:11:47.424659   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:47.424563   70049 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:10:00} reservation:<nil>}
	I0410 23:11:47.425946   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:47.425854   70049 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00038ea50}
	I0410 23:11:47.425971   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | created network xml: 
	I0410 23:11:47.425983   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | <network>
	I0410 23:11:47.425996   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG |   <name>mk-enable-default-cni-688825</name>
	I0410 23:11:47.426009   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG |   <dns enable='no'/>
	I0410 23:11:47.426023   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG |   
	I0410 23:11:47.426033   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0410 23:11:47.426041   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG |     <dhcp>
	I0410 23:11:47.426050   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0410 23:11:47.426059   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG |     </dhcp>
	I0410 23:11:47.426066   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG |   </ip>
	I0410 23:11:47.426074   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG |   
	I0410 23:11:47.426081   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | </network>
	I0410 23:11:47.426091   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | 
	I0410 23:11:47.431829   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | trying to create private KVM network mk-enable-default-cni-688825 192.168.61.0/24...
	I0410 23:11:47.512648   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | private KVM network mk-enable-default-cni-688825 192.168.61.0/24 created
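(Aside: once created, the private network can be inspected with standard virsh commands; a sketch, not test output.)

    # Confirm the libvirt network generated from the XML above (sketch).
    virsh net-list --all | grep mk-enable-default-cni-688825
    virsh net-dumpxml mk-enable-default-cni-688825   # should show the 192.168.61.0/24 range and DHCP pool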
	I0410 23:11:47.512687   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:47.509962   70049 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 23:11:47.512701   69967 main.go:141] libmachine: (enable-default-cni-688825) Setting up store path in /home/jenkins/minikube-integration/18610-5679/.minikube/machines/enable-default-cni-688825 ...
	I0410 23:11:47.512717   69967 main.go:141] libmachine: (enable-default-cni-688825) Building disk image from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso
	I0410 23:11:47.512734   69967 main.go:141] libmachine: (enable-default-cni-688825) Downloading /home/jenkins/minikube-integration/18610-5679/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso...
	I0410 23:11:47.741089   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:47.740962   70049 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/enable-default-cni-688825/id_rsa...
	I0410 23:11:47.832774   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:47.832632   70049 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/enable-default-cni-688825/enable-default-cni-688825.rawdisk...
	I0410 23:11:47.832824   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | Writing magic tar header
	I0410 23:11:47.832856   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | Writing SSH key tar header
	I0410 23:11:47.832887   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:47.832758   70049 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/enable-default-cni-688825 ...
	I0410 23:11:47.832969   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/enable-default-cni-688825
	I0410 23:11:47.832989   69967 main.go:141] libmachine: (enable-default-cni-688825) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/enable-default-cni-688825 (perms=drwx------)
	I0410 23:11:47.833006   69967 main.go:141] libmachine: (enable-default-cni-688825) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines (perms=drwxr-xr-x)
	I0410 23:11:47.833051   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines
	I0410 23:11:47.833068   69967 main.go:141] libmachine: (enable-default-cni-688825) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube (perms=drwxr-xr-x)
	I0410 23:11:47.833082   69967 main.go:141] libmachine: (enable-default-cni-688825) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679 (perms=drwxrwxr-x)
	I0410 23:11:47.833097   69967 main.go:141] libmachine: (enable-default-cni-688825) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0410 23:11:47.833111   69967 main.go:141] libmachine: (enable-default-cni-688825) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0410 23:11:47.833120   69967 main.go:141] libmachine: (enable-default-cni-688825) Creating domain...
	I0410 23:11:47.833201   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 23:11:47.833237   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679
	I0410 23:11:47.833255   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0410 23:11:47.833282   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | Checking permissions on dir: /home/jenkins
	I0410 23:11:47.833297   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | Checking permissions on dir: /home
	I0410 23:11:47.833310   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | Skipping /home - not owner
	I0410 23:11:47.834345   69967 main.go:141] libmachine: (enable-default-cni-688825) define libvirt domain using xml: 
	I0410 23:11:47.834385   69967 main.go:141] libmachine: (enable-default-cni-688825) <domain type='kvm'>
	I0410 23:11:47.834403   69967 main.go:141] libmachine: (enable-default-cni-688825)   <name>enable-default-cni-688825</name>
	I0410 23:11:47.834412   69967 main.go:141] libmachine: (enable-default-cni-688825)   <memory unit='MiB'>3072</memory>
	I0410 23:11:47.834425   69967 main.go:141] libmachine: (enable-default-cni-688825)   <vcpu>2</vcpu>
	I0410 23:11:47.834437   69967 main.go:141] libmachine: (enable-default-cni-688825)   <features>
	I0410 23:11:47.834450   69967 main.go:141] libmachine: (enable-default-cni-688825)     <acpi/>
	I0410 23:11:47.834465   69967 main.go:141] libmachine: (enable-default-cni-688825)     <apic/>
	I0410 23:11:47.834478   69967 main.go:141] libmachine: (enable-default-cni-688825)     <pae/>
	I0410 23:11:47.834485   69967 main.go:141] libmachine: (enable-default-cni-688825)     
	I0410 23:11:47.834493   69967 main.go:141] libmachine: (enable-default-cni-688825)   </features>
	I0410 23:11:47.834501   69967 main.go:141] libmachine: (enable-default-cni-688825)   <cpu mode='host-passthrough'>
	I0410 23:11:47.834508   69967 main.go:141] libmachine: (enable-default-cni-688825)   
	I0410 23:11:47.834515   69967 main.go:141] libmachine: (enable-default-cni-688825)   </cpu>
	I0410 23:11:47.834523   69967 main.go:141] libmachine: (enable-default-cni-688825)   <os>
	I0410 23:11:47.834531   69967 main.go:141] libmachine: (enable-default-cni-688825)     <type>hvm</type>
	I0410 23:11:47.834540   69967 main.go:141] libmachine: (enable-default-cni-688825)     <boot dev='cdrom'/>
	I0410 23:11:47.834547   69967 main.go:141] libmachine: (enable-default-cni-688825)     <boot dev='hd'/>
	I0410 23:11:47.834557   69967 main.go:141] libmachine: (enable-default-cni-688825)     <bootmenu enable='no'/>
	I0410 23:11:47.834564   69967 main.go:141] libmachine: (enable-default-cni-688825)   </os>
	I0410 23:11:47.834573   69967 main.go:141] libmachine: (enable-default-cni-688825)   <devices>
	I0410 23:11:47.834581   69967 main.go:141] libmachine: (enable-default-cni-688825)     <disk type='file' device='cdrom'>
	I0410 23:11:47.834601   69967 main.go:141] libmachine: (enable-default-cni-688825)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/enable-default-cni-688825/boot2docker.iso'/>
	I0410 23:11:47.834611   69967 main.go:141] libmachine: (enable-default-cni-688825)       <target dev='hdc' bus='scsi'/>
	I0410 23:11:47.834619   69967 main.go:141] libmachine: (enable-default-cni-688825)       <readonly/>
	I0410 23:11:47.834626   69967 main.go:141] libmachine: (enable-default-cni-688825)     </disk>
	I0410 23:11:47.834637   69967 main.go:141] libmachine: (enable-default-cni-688825)     <disk type='file' device='disk'>
	I0410 23:11:47.834647   69967 main.go:141] libmachine: (enable-default-cni-688825)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0410 23:11:47.834662   69967 main.go:141] libmachine: (enable-default-cni-688825)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/enable-default-cni-688825/enable-default-cni-688825.rawdisk'/>
	I0410 23:11:47.834671   69967 main.go:141] libmachine: (enable-default-cni-688825)       <target dev='hda' bus='virtio'/>
	I0410 23:11:47.834680   69967 main.go:141] libmachine: (enable-default-cni-688825)     </disk>
	I0410 23:11:47.834688   69967 main.go:141] libmachine: (enable-default-cni-688825)     <interface type='network'>
	I0410 23:11:47.834699   69967 main.go:141] libmachine: (enable-default-cni-688825)       <source network='mk-enable-default-cni-688825'/>
	I0410 23:11:47.834708   69967 main.go:141] libmachine: (enable-default-cni-688825)       <model type='virtio'/>
	I0410 23:11:47.834716   69967 main.go:141] libmachine: (enable-default-cni-688825)     </interface>
	I0410 23:11:47.834724   69967 main.go:141] libmachine: (enable-default-cni-688825)     <interface type='network'>
	I0410 23:11:47.834735   69967 main.go:141] libmachine: (enable-default-cni-688825)       <source network='default'/>
	I0410 23:11:47.834752   69967 main.go:141] libmachine: (enable-default-cni-688825)       <model type='virtio'/>
	I0410 23:11:47.834762   69967 main.go:141] libmachine: (enable-default-cni-688825)     </interface>
	I0410 23:11:47.834770   69967 main.go:141] libmachine: (enable-default-cni-688825)     <serial type='pty'>
	I0410 23:11:47.834779   69967 main.go:141] libmachine: (enable-default-cni-688825)       <target port='0'/>
	I0410 23:11:47.834786   69967 main.go:141] libmachine: (enable-default-cni-688825)     </serial>
	I0410 23:11:47.834795   69967 main.go:141] libmachine: (enable-default-cni-688825)     <console type='pty'>
	I0410 23:11:47.834804   69967 main.go:141] libmachine: (enable-default-cni-688825)       <target type='serial' port='0'/>
	I0410 23:11:47.834815   69967 main.go:141] libmachine: (enable-default-cni-688825)     </console>
	I0410 23:11:47.834823   69967 main.go:141] libmachine: (enable-default-cni-688825)     <rng model='virtio'>
	I0410 23:11:47.834833   69967 main.go:141] libmachine: (enable-default-cni-688825)       <backend model='random'>/dev/random</backend>
	I0410 23:11:47.834841   69967 main.go:141] libmachine: (enable-default-cni-688825)     </rng>
	I0410 23:11:47.834850   69967 main.go:141] libmachine: (enable-default-cni-688825)     
	I0410 23:11:47.834857   69967 main.go:141] libmachine: (enable-default-cni-688825)     
	I0410 23:11:47.834865   69967 main.go:141] libmachine: (enable-default-cni-688825)   </devices>
	I0410 23:11:47.834872   69967 main.go:141] libmachine: (enable-default-cni-688825) </domain>
	I0410 23:11:47.834883   69967 main.go:141] libmachine: (enable-default-cni-688825) 
	I0410 23:11:47.840445   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | domain enable-default-cni-688825 has defined MAC address 52:54:00:64:2d:4f in network default
	I0410 23:11:47.841239   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | domain enable-default-cni-688825 has defined MAC address 52:54:00:6f:d8:6a in network mk-enable-default-cni-688825
	I0410 23:11:47.841276   69967 main.go:141] libmachine: (enable-default-cni-688825) Ensuring networks are active...
	I0410 23:11:47.842168   69967 main.go:141] libmachine: (enable-default-cni-688825) Ensuring network default is active
	I0410 23:11:47.842630   69967 main.go:141] libmachine: (enable-default-cni-688825) Ensuring network mk-enable-default-cni-688825 is active
	I0410 23:11:47.843464   69967 main.go:141] libmachine: (enable-default-cni-688825) Getting domain xml...
	I0410 23:11:47.844492   69967 main.go:141] libmachine: (enable-default-cni-688825) Creating domain...
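(Aside: the domain defined from the XML above can be read back with virsh; a sketch.)

    # Read back the freshly defined domain and its two NICs (sketch).
    virsh dumpxml enable-default-cni-688825 | grep -E '<name>|<memory|<vcpu>'
    virsh domiflist enable-default-cni-688825   # one NIC on 'default', one on mk-enable-default-cni-688825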
	I0410 23:11:47.494317   68418 ssh_runner.go:195] Run: systemctl --version
	I0410 23:11:47.529278   68418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 23:11:47.696090   68418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 23:11:47.703390   68418 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 23:11:47.703460   68418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 23:11:47.722057   68418 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
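(Aside: the find/-exec mv above sidelines competing CNI configs by renaming them with a .mk_disabled suffix; a sketch of how to see the result on the guest.)

    # List CNI configs after the bridge/podman ones were renamed out of the way (sketch).
    ls /etc/cni/net.d/
    #   87-podman-bridge.conflist.mk_disabled   <- moved aside by the find ... -exec mv above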
	I0410 23:11:47.722080   68418 start.go:494] detecting cgroup driver to use...
	I0410 23:11:47.722135   68418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 23:11:47.739043   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 23:11:47.754237   68418 docker.go:217] disabling cri-docker service (if available) ...
	I0410 23:11:47.754304   68418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 23:11:47.769012   68418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 23:11:47.784861   68418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 23:11:47.919902   68418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 23:11:48.120796   68418 docker.go:233] disabling docker service ...
	I0410 23:11:48.120885   68418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 23:11:48.142325   68418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 23:11:48.159235   68418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 23:11:48.322367   68418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 23:11:48.465388   68418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 23:11:48.493138   68418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 23:11:48.517945   68418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 23:11:48.518008   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:11:48.533778   68418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 23:11:48.533856   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:11:48.549851   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:11:48.565531   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:11:48.578107   68418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 23:11:48.592719   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:11:48.604272   68418 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:11:48.624074   68418 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
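(Aside: a rough sketch of the keys these sed edits leave in the CRI-O drop-in; the key/value pairs are taken from the commands above, while the surrounding section layout of the file is not shown in the log.)

    # Spot-check /etc/crio/crio.conf.d/02-crio.conf after the edits above (sketch).
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",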
	I0410 23:11:48.635102   68418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 23:11:48.645280   68418 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 23:11:48.645347   68418 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 23:11:48.659796   68418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
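(Aside: the sysctl probe above exits with status 255 until br_netfilter is loaded; after the modprobe and the ip_forward write, the resulting kernel state can be confirmed as follows; a sketch.)

    # Confirm the kernel networking state set up by the two commands above (sketch).
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # resolvable (and typically 1) once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # 1, written by the echo above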
	I0410 23:11:48.672902   68418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 23:11:48.812095   68418 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 23:11:48.971516   68418 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 23:11:48.971589   68418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 23:11:48.978708   68418 start.go:562] Will wait 60s for crictl version
	I0410 23:11:48.978769   68418 ssh_runner.go:195] Run: which crictl
	I0410 23:11:48.983326   68418 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 23:11:49.026991   68418 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 23:11:49.027146   68418 ssh_runner.go:195] Run: crio --version
	I0410 23:11:49.061450   68418 ssh_runner.go:195] Run: crio --version
	I0410 23:11:49.096858   68418 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 23:11:46.026601   67722 pod_ready.go:102] pod "calico-kube-controllers-787f445f84-k5p6d" in "kube-system" namespace has status "Ready":"False"
	I0410 23:11:48.027686   67722 pod_ready.go:102] pod "calico-kube-controllers-787f445f84-k5p6d" in "kube-system" namespace has status "Ready":"False"
	I0410 23:11:50.534453   67722 pod_ready.go:102] pod "calico-kube-controllers-787f445f84-k5p6d" in "kube-system" namespace has status "Ready":"False"
	I0410 23:11:49.098320   68418 main.go:141] libmachine: (custom-flannel-688825) Calling .GetIP
	I0410 23:11:49.102047   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:49.102609   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:03:8c", ip: ""} in network mk-custom-flannel-688825: {Iface:virbr3 ExpiryTime:2024-04-11 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a0:03:8c Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-688825 Clientid:01:52:54:00:a0:03:8c}
	I0410 23:11:49.102634   68418 main.go:141] libmachine: (custom-flannel-688825) DBG | domain custom-flannel-688825 has defined IP address 192.168.39.9 and MAC address 52:54:00:a0:03:8c in network mk-custom-flannel-688825
	I0410 23:11:49.102928   68418 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 23:11:49.108349   68418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 23:11:49.125563   68418 kubeadm.go:877] updating cluster {Name:custom-flannel-688825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.3 ClusterName:custom-flannel-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 23:11:49.125695   68418 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 23:11:49.125740   68418 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 23:11:49.175093   68418 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 23:11:49.175163   68418 ssh_runner.go:195] Run: which lz4
	I0410 23:11:49.179712   68418 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 23:11:49.185677   68418 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 23:11:49.185707   68418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 23:11:50.880628   68418 crio.go:462] duration metric: took 1.700960312s to copy over tarball
	I0410 23:11:50.880705   68418 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 23:11:49.173770   69967 main.go:141] libmachine: (enable-default-cni-688825) Waiting to get IP...
	I0410 23:11:49.174800   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | domain enable-default-cni-688825 has defined MAC address 52:54:00:6f:d8:6a in network mk-enable-default-cni-688825
	I0410 23:11:49.175415   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | unable to find current IP address of domain enable-default-cni-688825 in network mk-enable-default-cni-688825
	I0410 23:11:49.175450   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:49.175335   70049 retry.go:31] will retry after 275.158936ms: waiting for machine to come up
	I0410 23:11:49.452190   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | domain enable-default-cni-688825 has defined MAC address 52:54:00:6f:d8:6a in network mk-enable-default-cni-688825
	I0410 23:11:49.452768   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | unable to find current IP address of domain enable-default-cni-688825 in network mk-enable-default-cni-688825
	I0410 23:11:49.452799   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:49.452725   70049 retry.go:31] will retry after 379.824985ms: waiting for machine to come up
	I0410 23:11:49.834586   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | domain enable-default-cni-688825 has defined MAC address 52:54:00:6f:d8:6a in network mk-enable-default-cni-688825
	I0410 23:11:49.835196   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | unable to find current IP address of domain enable-default-cni-688825 in network mk-enable-default-cni-688825
	I0410 23:11:49.835225   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:49.835116   70049 retry.go:31] will retry after 410.923827ms: waiting for machine to come up
	I0410 23:11:50.247909   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | domain enable-default-cni-688825 has defined MAC address 52:54:00:6f:d8:6a in network mk-enable-default-cni-688825
	I0410 23:11:50.248825   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | unable to find current IP address of domain enable-default-cni-688825 in network mk-enable-default-cni-688825
	I0410 23:11:50.248856   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:50.248778   70049 retry.go:31] will retry after 461.664719ms: waiting for machine to come up
	I0410 23:11:50.712803   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | domain enable-default-cni-688825 has defined MAC address 52:54:00:6f:d8:6a in network mk-enable-default-cni-688825
	I0410 23:11:50.713314   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | unable to find current IP address of domain enable-default-cni-688825 in network mk-enable-default-cni-688825
	I0410 23:11:50.713350   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:50.713228   70049 retry.go:31] will retry after 655.469175ms: waiting for machine to come up
	I0410 23:11:51.370778   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | domain enable-default-cni-688825 has defined MAC address 52:54:00:6f:d8:6a in network mk-enable-default-cni-688825
	I0410 23:11:51.371290   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | unable to find current IP address of domain enable-default-cni-688825 in network mk-enable-default-cni-688825
	I0410 23:11:51.371313   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:51.371249   70049 retry.go:31] will retry after 711.568247ms: waiting for machine to come up
	I0410 23:11:52.084077   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | domain enable-default-cni-688825 has defined MAC address 52:54:00:6f:d8:6a in network mk-enable-default-cni-688825
	I0410 23:11:52.084597   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | unable to find current IP address of domain enable-default-cni-688825 in network mk-enable-default-cni-688825
	I0410 23:11:52.084632   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:52.084539   70049 retry.go:31] will retry after 909.503915ms: waiting for machine to come up
	I0410 23:11:52.995725   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | domain enable-default-cni-688825 has defined MAC address 52:54:00:6f:d8:6a in network mk-enable-default-cni-688825
	I0410 23:11:52.996267   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | unable to find current IP address of domain enable-default-cni-688825 in network mk-enable-default-cni-688825
	I0410 23:11:52.996296   69967 main.go:141] libmachine: (enable-default-cni-688825) DBG | I0410 23:11:52.996202   70049 retry.go:31] will retry after 994.488607ms: waiting for machine to come up
	
	
	==> CRI-O <==
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.160707090Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a3a388381d1b5b3faacc34b89b54e0a12b7c8f80299767ba86d54d9a14c50050,Metadata:&PodSandboxMetadata{Name:busybox,Uid:3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789368978782351,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:49:21.134596164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8068ec9c3c4f3650ac51ea3733b91d94bec34626d668d72d72ec69c59563d9d,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-ghnvx,Uid:88ebd9b0-ecf0-4037-b5b0-547dad2354ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:171278
9368968713534,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-ghnvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ebd9b0-ecf0-4037-b5b0-547dad2354ba,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:49:21.134587351Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0454883ef98cafab75880275ba90800e4a3658c3be47cc4b1010269f9628b89e,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-9l2hc,Uid:2f5cda2f-4d8f-4798-954e-5ef588f2b88f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789367169099010,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-9l2hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f5cda2f-4d8f-4798-954e-5ef588f2b88f,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10
T22:49:21.134600233Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c397ba3b09882ccc1c830123edfe2babba2ead7db84fddb462ad7ec92d39efbf,Metadata:&PodSandboxMetadata{Name:kube-proxy-5mbwx,Uid:44724487-9539-4079-9fd6-40cb70208b95,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789361461062558,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5mbwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44724487-9539-4079-9fd6-40cb70208b95,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:49:21.134599230Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e4e09f42-54ba-480e-a020-1ca071a54558,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789361447388876,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-04-10T22:49:21.134594768Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43e313fbc0995dd76558baf805ab503e2074e02a850714fac77905d3afadddb1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-519831,Uid:863ed51eb16fa172b74df541a53ae3ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789356610835677,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863ed51eb16fa172b74df541a53ae3ab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.170:8444,kubernetes.io/config.hash: 863ed51eb16fa172b74df541a53ae3ab,kubernetes.io/config.seen: 2024-04-10T22:49:16.126709948Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ddb7b6f14e3c7a6f45aea5165980feb35944ee67c926ccf9b6f710b0b43927
73,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-519831,Uid:ccc50e24580ad579db03e5cd167e7fa1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789356609597911,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc50e24580ad579db03e5cd167e7fa1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.170:2379,kubernetes.io/config.hash: ccc50e24580ad579db03e5cd167e7fa1,kubernetes.io/config.seen: 2024-04-10T22:49:16.126705966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:46bd4334b1632938661e837855bc6ad1ef771620f76d494a084f53a7d4809179,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-519831,Uid:9929847901461a760df7cd55eacdb8ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789356588799651,Labels:map[string]str
ing{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9929847901461a760df7cd55eacdb8ba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9929847901461a760df7cd55eacdb8ba,kubernetes.io/config.seen: 2024-04-10T22:49:16.126711943Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f06217fe54c3ae56d250a3d9d36b24c714c597e793a37a70b89a989b51b08918,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-519831,Uid:97afe0be93fc66092f9b2a5325da352b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789356586806431,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97afe0be93fc66092f9b2a5325da352b,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 97afe0be93fc66092f9b2a5325da352b,kubernetes.io/config.seen: 2024-04-10T22:49:16.126711193Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=65947618-2df5-417a-bc54-50c5ac6556fd name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.161814995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=786832cd-784f-4a22-a56a-59d17531858c name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.161884646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=786832cd-784f-4a22-a56a-59d17531858c name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.162898541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789392385667554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1dc13f36ea7a5ded1554b7c6697e0987fd40c7ebf17cca475ec8b0b8cfed81,PodSandboxId:a3a388381d1b5b3faacc34b89b54e0a12b7c8f80299767ba86d54d9a14c50050,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712789371889578822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 320f878f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3,PodSandboxId:e8068ec9c3c4f3650ac51ea3733b91d94bec34626d668d72d72ec69c59563d9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789369289746081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ghnvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ebd9b0-ecf0-4037-b5b0-547dad2354ba,},Annotations:map[string]string{io.kubernetes.container.hash: e4f85df5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b,PodSandboxId:c397ba3b09882ccc1c830123edfe2babba2ead7db84fddb462ad7ec92d39efbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712789361635283392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mbwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44724487-9
539-4079-9fd6-40cb70208b95,},Annotations:map[string]string{io.kubernetes.container.hash: 3db0b90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14,PodSandboxId:46bd4334b1632938661e837855bc6ad1ef771620f76d494a084f53a7d4809179,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789356922394650,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992984790
1461a760df7cd55eacdb8ba,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072,PodSandboxId:ddb7b6f14e3c7a6f45aea5165980feb35944ee67c926ccf9b6f710b0b4392773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789356871638389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc50e24580ad579db03e5cd167e7fa1,},Annota
tions:map[string]string{io.kubernetes.container.hash: d017430f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c,PodSandboxId:43e313fbc0995dd76558baf805ab503e2074e02a850714fac77905d3afadddb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789356842309835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863ed51eb16fa172b74df541a53ae3ab,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 8c521e92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39,PodSandboxId:f06217fe54c3ae56d250a3d9d36b24c714c597e793a37a70b89a989b51b08918,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789356764603205,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97afe0be93fc66092f
9b2a5325da352b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=786832cd-784f-4a22-a56a-59d17531858c name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.174676268Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afc7145e-6560-4be8-8d75-b9db716e0d67 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.174868464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afc7145e-6560-4be8-8d75-b9db716e0d67 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.176225580Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92d4cf1a-ba3e-4d1f-9452-585c6a7c10ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.176752027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790716176725830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92d4cf1a-ba3e-4d1f-9452-585c6a7c10ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.178521041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6e88d90-ce38-4cc3-9f2d-df6055864331 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.178791014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6e88d90-ce38-4cc3-9f2d-df6055864331 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.179719327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789392385667554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1dc13f36ea7a5ded1554b7c6697e0987fd40c7ebf17cca475ec8b0b8cfed81,PodSandboxId:a3a388381d1b5b3faacc34b89b54e0a12b7c8f80299767ba86d54d9a14c50050,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712789371889578822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 320f878f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3,PodSandboxId:e8068ec9c3c4f3650ac51ea3733b91d94bec34626d668d72d72ec69c59563d9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789369289746081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ghnvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ebd9b0-ecf0-4037-b5b0-547dad2354ba,},Annotations:map[string]string{io.kubernetes.container.hash: e4f85df5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712789361639886058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b,PodSandboxId:c397ba3b09882ccc1c830123edfe2babba2ead7db84fddb462ad7ec92d39efbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712789361635283392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mbwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44724487-9539-4079-9fd6
-40cb70208b95,},Annotations:map[string]string{io.kubernetes.container.hash: 3db0b90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14,PodSandboxId:46bd4334b1632938661e837855bc6ad1ef771620f76d494a084f53a7d4809179,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789356922394650,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9929847901461a760df7cd
55eacdb8ba,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072,PodSandboxId:ddb7b6f14e3c7a6f45aea5165980feb35944ee67c926ccf9b6f710b0b4392773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789356871638389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc50e24580ad579db03e5cd167e7fa1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d017430f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c,PodSandboxId:43e313fbc0995dd76558baf805ab503e2074e02a850714fac77905d3afadddb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789356842309835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863ed51eb16fa172b74df541a53ae3ab,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8c521e92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39,PodSandboxId:f06217fe54c3ae56d250a3d9d36b24c714c597e793a37a70b89a989b51b08918,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789356764603205,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97afe0be93fc66092f9b2a5325da352
b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6e88d90-ce38-4cc3-9f2d-df6055864331 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.223543852Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=215cd807-83df-4a7b-bc0e-602498b44f5a name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.223658010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=215cd807-83df-4a7b-bc0e-602498b44f5a name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.224725918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22b15698-1a41-4939-b971-83a0d96cf94a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.225306157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790716225280713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22b15698-1a41-4939-b971-83a0d96cf94a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.226175209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53405e3f-157c-4c33-b557-335215cd1d7d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.226227660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53405e3f-157c-4c33-b557-335215cd1d7d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.226423951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789392385667554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1dc13f36ea7a5ded1554b7c6697e0987fd40c7ebf17cca475ec8b0b8cfed81,PodSandboxId:a3a388381d1b5b3faacc34b89b54e0a12b7c8f80299767ba86d54d9a14c50050,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712789371889578822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 320f878f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3,PodSandboxId:e8068ec9c3c4f3650ac51ea3733b91d94bec34626d668d72d72ec69c59563d9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789369289746081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ghnvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ebd9b0-ecf0-4037-b5b0-547dad2354ba,},Annotations:map[string]string{io.kubernetes.container.hash: e4f85df5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712789361639886058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b,PodSandboxId:c397ba3b09882ccc1c830123edfe2babba2ead7db84fddb462ad7ec92d39efbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712789361635283392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mbwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44724487-9539-4079-9fd6
-40cb70208b95,},Annotations:map[string]string{io.kubernetes.container.hash: 3db0b90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14,PodSandboxId:46bd4334b1632938661e837855bc6ad1ef771620f76d494a084f53a7d4809179,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789356922394650,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9929847901461a760df7cd
55eacdb8ba,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072,PodSandboxId:ddb7b6f14e3c7a6f45aea5165980feb35944ee67c926ccf9b6f710b0b4392773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789356871638389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc50e24580ad579db03e5cd167e7fa1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d017430f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c,PodSandboxId:43e313fbc0995dd76558baf805ab503e2074e02a850714fac77905d3afadddb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789356842309835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863ed51eb16fa172b74df541a53ae3ab,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8c521e92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39,PodSandboxId:f06217fe54c3ae56d250a3d9d36b24c714c597e793a37a70b89a989b51b08918,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789356764603205,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97afe0be93fc66092f9b2a5325da352
b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53405e3f-157c-4c33-b557-335215cd1d7d name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.268905338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=713cff18-ec48-4f0c-a79d-b7bd3ae2a3cd name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.268978273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=713cff18-ec48-4f0c-a79d-b7bd3ae2a3cd name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.270617742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f026d4e6-d943-43c2-b1e8-413685ae913d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.270988453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790716270967783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f026d4e6-d943-43c2-b1e8-413685ae913d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.271560682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d756ce1f-35be-4464-bf18-40b9465df331 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.271611734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d756ce1f-35be-4464-bf18-40b9465df331 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:56 default-k8s-diff-port-519831 crio[723]: time="2024-04-10 23:11:56.271798782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789392385667554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1dc13f36ea7a5ded1554b7c6697e0987fd40c7ebf17cca475ec8b0b8cfed81,PodSandboxId:a3a388381d1b5b3faacc34b89b54e0a12b7c8f80299767ba86d54d9a14c50050,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712789371889578822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 320f878f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3,PodSandboxId:e8068ec9c3c4f3650ac51ea3733b91d94bec34626d668d72d72ec69c59563d9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789369289746081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ghnvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88ebd9b0-ecf0-4037-b5b0-547dad2354ba,},Annotations:map[string]string{io.kubernetes.container.hash: e4f85df5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7,PodSandboxId:bfca7f6e83b9d3cb6a78e30a6f9cc6c871b892275a04c98d872519b18eb5f674,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712789361639886058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e4e09f42-54ba-480e-a020-1ca071a54558,},Annotations:map[string]string{io.kubernetes.container.hash: 160f84da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b,PodSandboxId:c397ba3b09882ccc1c830123edfe2babba2ead7db84fddb462ad7ec92d39efbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712789361635283392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mbwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44724487-9539-4079-9fd6
-40cb70208b95,},Annotations:map[string]string{io.kubernetes.container.hash: 3db0b90,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14,PodSandboxId:46bd4334b1632938661e837855bc6ad1ef771620f76d494a084f53a7d4809179,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789356922394650,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9929847901461a760df7cd
55eacdb8ba,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072,PodSandboxId:ddb7b6f14e3c7a6f45aea5165980feb35944ee67c926ccf9b6f710b0b4392773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789356871638389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc50e24580ad579db03e5cd167e7fa1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d017430f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c,PodSandboxId:43e313fbc0995dd76558baf805ab503e2074e02a850714fac77905d3afadddb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789356842309835,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863ed51eb16fa172b74df541a53ae3ab,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8c521e92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39,PodSandboxId:f06217fe54c3ae56d250a3d9d36b24c714c597e793a37a70b89a989b51b08918,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789356764603205,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-519831,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97afe0be93fc66092f9b2a5325da352
b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d756ce1f-35be-4464-bf18-40b9465df331 name=/runtime.v1.RuntimeService/ListContainers
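	
	Note: the Version / ListContainers / ImageFsInfo entries above are ordinary CRI calls made by the log collector against the CRI-O socket. Below is a minimal editorial sketch of how the same queries could be issued; it assumes the socket path unix:///var/run/crio/crio.sock (taken from the kubeadm cri-socket annotation in the node description further down) and uses the k8s.io/cri-api Go bindings. On the node itself, crictl version, crictl ps and crictl imagefsinfo return the same information.
	
	-- sketch (Go, editorial example; not part of the test output) --
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial the CRI-O socket (path is an assumption; see the node annotation below).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)
	
		// RuntimeService/Version -- mirrors the VersionResponse lines in the log.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("Version: %v", err)
		}
		fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)
	
		// RuntimeService/ListContainers with the same CONTAINER_RUNNING filter as above.
		cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range cs.Containers {
			fmt.Printf("%s  %s\n", c.Id[:13], c.Metadata.Name)
		}
	
		// ImageService/ImageFsInfo -- mirrors the overlay-images filesystem usage lines.
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatalf("ImageFsInfo: %v", err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("%s used=%d bytes\n", f.FsId.Mountpoint, f.UsedBytes.Value)
		}
	}
	-- end sketch --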
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3e97b78e0d5a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       2                   bfca7f6e83b9d       storage-provisioner
	ac1dc13f36ea7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   a3a388381d1b5       busybox
	d0547fcd34655       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   e8068ec9c3c4f       coredns-76f75df574-ghnvx
	912eddb6d12e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   bfca7f6e83b9d       storage-provisioner
	7c920ae26b3cc       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      22 minutes ago      Running             kube-proxy                1                   c397ba3b09882       kube-proxy-5mbwx
	b9d427d7dee4f       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      22 minutes ago      Running             kube-scheduler            1                   46bd4334b1632       kube-scheduler-default-k8s-diff-port-519831
	34b1b1f972a8e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      22 minutes ago      Running             etcd                      1                   ddb7b6f14e3c7       etcd-default-k8s-diff-port-519831
	74618e834b629       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      22 minutes ago      Running             kube-apiserver            1                   43e313fbc0995       kube-apiserver-default-k8s-diff-port-519831
	c9b5f1abd2321       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      22 minutes ago      Running             kube-controller-manager   1                   f06217fe54c3a       kube-controller-manager-default-k8s-diff-port-519831
	
	
	==> coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33418 - 55032 "HINFO IN 1503125876999987611.2945278978932795479. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01765054s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-519831
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-519831
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=default-k8s-diff-port-519831
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T22_43_47_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:43:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-519831
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 23:11:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 23:10:16 +0000   Wed, 10 Apr 2024 22:43:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 23:10:16 +0000   Wed, 10 Apr 2024 22:43:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 23:10:16 +0000   Wed, 10 Apr 2024 22:43:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 23:10:16 +0000   Wed, 10 Apr 2024 22:49:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.170
	  Hostname:    default-k8s-diff-port-519831
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6a949113b4840c7820f576b4306ecaf
	  System UUID:                e6a94911-3b48-40c7-820f-576b4306ecaf
	  Boot ID:                    db3de20c-9744-477a-b762-2fb75ae1f894
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-76f75df574-ghnvx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-default-k8s-diff-port-519831                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-519831             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-519831    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-5mbwx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-default-k8s-diff-port-519831             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-9l2hc                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-519831 status is now: NodeReady
	  Normal  RegisteredNode           27m                node-controller  Node default-k8s-diff-port-519831 event: Registered Node default-k8s-diff-port-519831 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-519831 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-519831 event: Registered Node default-k8s-diff-port-519831 in Controller
	
	
	==> dmesg <==
	[Apr10 22:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052452] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045077] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.761794] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.915689] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.649830] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr10 22:49] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.064376] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069716] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.179497] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.172233] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.318231] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.893503] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.071288] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.213109] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +5.645830] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.402995] systemd-fstab-generator[1563]: Ignoring "noauto" option for root device
	[  +3.260151] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.322773] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] <==
	{"level":"info","ts":"2024-04-10T22:49:33.316528Z","caller":"traceutil/trace.go:171","msg":"trace[642332303] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"118.035105ms","start":"2024-04-10T22:49:33.198479Z","end":"2024-04-10T22:49:33.316514Z","steps":["trace[642332303] 'process raft request'  (duration: 117.918675ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T22:59:19.245292Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":830}
	{"level":"info","ts":"2024-04-10T22:59:19.256752Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":830,"took":"10.768809ms","hash":132352172,"current-db-size-bytes":2588672,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2588672,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-10T22:59:19.256861Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":132352172,"revision":830,"compact-revision":-1}
	{"level":"info","ts":"2024-04-10T23:04:19.258236Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1073}
	{"level":"info","ts":"2024-04-10T23:04:19.263177Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1073,"took":"4.530822ms","hash":3053529259,"current-db-size-bytes":2588672,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1630208,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-10T23:04:19.263232Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3053529259,"revision":1073,"compact-revision":830}
	{"level":"info","ts":"2024-04-10T23:08:33.190724Z","caller":"traceutil/trace.go:171","msg":"trace[506316672] transaction","detail":"{read_only:false; response_revision:1522; number_of_response:1; }","duration":"105.203223ms","start":"2024-04-10T23:08:33.08547Z","end":"2024-04-10T23:08:33.190674Z","steps":["trace[506316672] 'process raft request'  (duration: 105.041092ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T23:08:33.973496Z","caller":"traceutil/trace.go:171","msg":"trace[244056186] transaction","detail":"{read_only:false; response_revision:1523; number_of_response:1; }","duration":"137.843794ms","start":"2024-04-10T23:08:33.835347Z","end":"2024-04-10T23:08:33.973191Z","steps":["trace[244056186] 'process raft request'  (duration: 137.725021ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T23:08:54.802441Z","caller":"traceutil/trace.go:171","msg":"trace[446872963] linearizableReadLoop","detail":"{readStateIndex:1812; appliedIndex:1811; }","duration":"144.948585ms","start":"2024-04-10T23:08:54.65746Z","end":"2024-04-10T23:08:54.802409Z","steps":["trace[446872963] 'read index received'  (duration: 144.829594ms)","trace[446872963] 'applied index is now lower than readState.Index'  (duration: 118.375µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-10T23:08:54.802903Z","caller":"traceutil/trace.go:171","msg":"trace[1513754587] transaction","detail":"{read_only:false; response_revision:1540; number_of_response:1; }","duration":"149.443097ms","start":"2024-04-10T23:08:54.653376Z","end":"2024-04-10T23:08:54.802819Z","steps":["trace[1513754587] 'process raft request'  (duration: 148.870461ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T23:08:54.802917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.209889ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T23:08:54.803237Z","caller":"traceutil/trace.go:171","msg":"trace[1822292877] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1540; }","duration":"145.784974ms","start":"2024-04-10T23:08:54.657436Z","end":"2024-04-10T23:08:54.803221Z","steps":["trace[1822292877] 'agreement among raft nodes before linearized reading'  (duration: 145.204563ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T23:09:19.267365Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1317}
	{"level":"info","ts":"2024-04-10T23:09:19.271909Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1317,"took":"4.302075ms","hash":115727767,"current-db-size-bytes":2588672,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-10T23:09:19.271967Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":115727767,"revision":1317,"compact-revision":1073}
	{"level":"info","ts":"2024-04-10T23:09:40.54523Z","caller":"traceutil/trace.go:171","msg":"trace[1966154802] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"155.385394ms","start":"2024-04-10T23:09:40.389461Z","end":"2024-04-10T23:09:40.544846Z","steps":["trace[1966154802] 'process raft request'  (duration: 155.282067ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T23:09:40.807079Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.628477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T23:09:40.807161Z","caller":"traceutil/trace.go:171","msg":"trace[1481582913] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1577; }","duration":"152.81421ms","start":"2024-04-10T23:09:40.65433Z","end":"2024-04-10T23:09:40.807145Z","steps":["trace[1481582913] 'range keys from in-memory index tree'  (duration: 152.565246ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T23:10:31.102156Z","caller":"traceutil/trace.go:171","msg":"trace[1666224204] transaction","detail":"{read_only:false; response_revision:1618; number_of_response:1; }","duration":"271.982296ms","start":"2024-04-10T23:10:30.830135Z","end":"2024-04-10T23:10:31.102117Z","steps":["trace[1666224204] 'process raft request'  (duration: 271.78014ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T23:10:57.747835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.493418ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16670793969764106443 > lease_revoke:<id:675a8eca33838080>","response":"size:29"}
	{"level":"info","ts":"2024-04-10T23:10:57.748181Z","caller":"traceutil/trace.go:171","msg":"trace[963120919] linearizableReadLoop","detail":"{readStateIndex:1939; appliedIndex:1938; }","duration":"123.455584ms","start":"2024-04-10T23:10:57.624637Z","end":"2024-04-10T23:10:57.748093Z","steps":["trace[963120919] 'read index received'  (duration: 32.763µs)","trace[963120919] 'applied index is now lower than readState.Index'  (duration: 123.421178ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-10T23:10:57.74836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.680303ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-10T23:10:57.748892Z","caller":"traceutil/trace.go:171","msg":"trace[422452565] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:0; response_revision:1640; }","duration":"124.276642ms","start":"2024-04-10T23:10:57.624606Z","end":"2024-04-10T23:10:57.748883Z","steps":["trace[422452565] 'agreement among raft nodes before linearized reading'  (duration: 123.686125ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T23:11:43.67429Z","caller":"traceutil/trace.go:171","msg":"trace[917499993] transaction","detail":"{read_only:false; response_revision:1678; number_of_response:1; }","duration":"106.553232ms","start":"2024-04-10T23:11:43.567701Z","end":"2024-04-10T23:11:43.674254Z","steps":["trace[917499993] 'process raft request'  (duration: 106.030122ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:11:56 up 23 min,  0 users,  load average: 0.16, 0.23, 0.16
	Linux default-k8s-diff-port-519831 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] <==
	I0410 23:05:21.721975       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:07:21.721668       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:07:21.721763       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:07:21.721773       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:07:21.722903       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:07:21.723081       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:07:21.723116       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:09:20.722352       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:09:20.722505       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0410 23:09:21.723161       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:09:21.723392       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:09:21.723462       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:09:21.723322       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:09:21.723658       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:09:21.724895       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:10:21.723662       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:10:21.723735       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:10:21.723754       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:10:21.726050       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:10:21.726176       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:10:21.726207       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] <==
	I0410 23:06:04.087599       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:06:33.456865       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:06:34.096637       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:07:03.461760       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:07:04.105507       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:07:33.472058       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:07:34.115641       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:08:03.477722       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:08:04.124476       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:08:33.482731       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:08:34.134078       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:09:03.487759       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:09:04.141783       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:09:33.500551       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:09:34.152505       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:10:03.506770       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:10:04.164275       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:10:33.513185       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:10:34.173497       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0410 23:11:03.177723       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="254.224µs"
	E0410 23:11:03.518723       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:11:04.182230       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0410 23:11:14.183736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.144667ms"
	E0410 23:11:33.537200       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:11:34.192267       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] <==
	I0410 22:49:21.933511       1 server_others.go:72] "Using iptables proxy"
	I0410 22:49:21.944869       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.170"]
	I0410 22:49:21.986389       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 22:49:21.986437       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:49:21.986451       1 server_others.go:168] "Using iptables Proxier"
	I0410 22:49:21.989702       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:49:21.989921       1 server.go:865] "Version info" version="v1.29.3"
	I0410 22:49:21.989970       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:49:21.991943       1 config.go:315] "Starting node config controller"
	I0410 22:49:21.991978       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 22:49:21.993777       1 config.go:188] "Starting service config controller"
	I0410 22:49:21.993852       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 22:49:21.998527       1 config.go:97] "Starting endpoint slice config controller"
	I0410 22:49:21.998653       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 22:49:22.092973       1 shared_informer.go:318] Caches are synced for node config
	I0410 22:49:22.099367       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0410 22:49:22.099431       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] <==
	I0410 22:49:18.217487       1 serving.go:380] Generated self-signed cert in-memory
	W0410 22:49:20.678578       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0410 22:49:20.678623       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 22:49:20.678637       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0410 22:49:20.678647       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0410 22:49:20.711086       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0410 22:49:20.711202       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:49:20.717074       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0410 22:49:20.717258       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0410 22:49:20.720149       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0410 22:49:20.720334       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0410 22:49:20.819163       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 10 23:09:25 default-k8s-diff-port-519831 kubelet[938]: E0410 23:09:25.160306     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:09:40 default-k8s-diff-port-519831 kubelet[938]: E0410 23:09:40.162736     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:09:51 default-k8s-diff-port-519831 kubelet[938]: E0410 23:09:51.160100     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:10:03 default-k8s-diff-port-519831 kubelet[938]: E0410 23:10:03.161607     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:10:16 default-k8s-diff-port-519831 kubelet[938]: E0410 23:10:16.185837     938 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 23:10:16 default-k8s-diff-port-519831 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:10:16 default-k8s-diff-port-519831 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:10:16 default-k8s-diff-port-519831 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:10:16 default-k8s-diff-port-519831 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:10:18 default-k8s-diff-port-519831 kubelet[938]: E0410 23:10:18.164135     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:10:33 default-k8s-diff-port-519831 kubelet[938]: E0410 23:10:33.161368     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:10:48 default-k8s-diff-port-519831 kubelet[938]: E0410 23:10:48.178273     938 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 10 23:10:48 default-k8s-diff-port-519831 kubelet[938]: E0410 23:10:48.178429     938 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 10 23:10:48 default-k8s-diff-port-519831 kubelet[938]: E0410 23:10:48.179704     938 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5d4k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-9l2hc_kube-system(2f5cda2f-4d8f-4798-954e-5ef588f2b88f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 10 23:10:48 default-k8s-diff-port-519831 kubelet[938]: E0410 23:10:48.179946     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:11:03 default-k8s-diff-port-519831 kubelet[938]: E0410 23:11:03.161265     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:11:14 default-k8s-diff-port-519831 kubelet[938]: E0410 23:11:14.161491     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:11:16 default-k8s-diff-port-519831 kubelet[938]: E0410 23:11:16.185725     938 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 23:11:16 default-k8s-diff-port-519831 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:11:16 default-k8s-diff-port-519831 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:11:16 default-k8s-diff-port-519831 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:11:16 default-k8s-diff-port-519831 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:11:26 default-k8s-diff-port-519831 kubelet[938]: E0410 23:11:26.162826     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:11:37 default-k8s-diff-port-519831 kubelet[938]: E0410 23:11:37.161929     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	Apr 10 23:11:50 default-k8s-diff-port-519831 kubelet[938]: E0410 23:11:50.160963     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9l2hc" podUID="2f5cda2f-4d8f-4798-954e-5ef588f2b88f"
	
	
	==> storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] <==
	I0410 22:49:52.497299       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0410 22:49:52.507960       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0410 22:49:52.509176       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0410 22:50:09.915410       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0410 22:50:09.916253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-519831_a9bc36bb-fa90-48cd-80dd-aa4c941ecc2b!
	I0410 22:50:09.917353       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0ec6d66-6487-4c61-bc0a-39f866affbb8", APIVersion:"v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-519831_a9bc36bb-fa90-48cd-80dd-aa4c941ecc2b became leader
	I0410 22:50:10.017379       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-519831_a9bc36bb-fa90-48cd-80dd-aa4c941ecc2b!
	
	
	==> storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] <==
	I0410 22:49:21.912544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0410 22:49:51.920616       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-519831 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9l2hc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-519831 describe pod metrics-server-57f55c9bc5-9l2hc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-519831 describe pod metrics-server-57f55c9bc5-9l2hc: exit status 1 (98.326891ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9l2hc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-519831 describe pod metrics-server-57f55c9bc5-9l2hc: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (544.27s)
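The non-running-pod query that helpers_test.go runs above can be reproduced by hand when triaging this failure. A minimal sketch, assuming the default-k8s-diff-port-519831 kubeconfig context from the log is still available and that the bundled metrics-server keeps its usual k8s-app=metrics-server label (an assumption, since the label is not shown in the log):

	# List pods that are not Running in any namespace of the test profile,
	# mirroring the helpers_test.go query shown above.
	kubectl --context default-k8s-diff-port-519831 get pods -A \
	  --field-selector=status.phase!=Running

	# Describe the stuck pod to confirm the ImagePullBackOff on fake.domain
	# reported in the kubelet log.
	kubectl --context default-k8s-diff-port-519831 -n kube-system \
	  describe pod -l k8s-app=metrics-server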

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (481.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-706500 -n embed-certs-706500
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-10 23:11:14.087208608 +0000 UTC m=+6196.323637893
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-706500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-706500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.521µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-706500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
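Because the describe call above hit the context deadline, the deployment info that the assertion at start_stop_delete_test.go:297 wanted (an image containing registry.k8s.io/echoserver:1.4) was never captured. A minimal manual equivalent, assuming the embed-certs-706500 context is still reachable:

	# Print only the container images of the deployment the test inspects;
	# the assertion expects one of them to contain registry.k8s.io/echoserver:1.4.
	kubectl --context embed-certs-706500 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'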
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-706500 -n embed-certs-706500
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-706500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-706500 logs -n 25: (1.426901655s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p auto-688825 sudo systemctl                        | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | cat kubelet --no-pager                               |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo journalctl                       | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | -xeu kubelet --all --full                            |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo cat                              | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo cat                              | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo systemctl                        | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC |                     |
	|         | status docker --all --full                           |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo systemctl                        | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | cat docker --no-pager                                |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo cat                              | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | /etc/docker/daemon.json                              |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo docker                           | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC |                     |
	|         | system info                                          |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo systemctl                        | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo systemctl                        | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | cat cri-docker --no-pager                            |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo cat                              | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo cat                              | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo                                  | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | cri-dockerd --version                                |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo systemctl                        | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC |                     |
	|         | status containerd --all --full                       |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo systemctl                        | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | cat containerd --no-pager                            |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo cat                              | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo cat                              | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | /etc/containerd/config.toml                          |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo containerd                       | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | config dump                                          |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo systemctl                        | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | status crio --all --full                             |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo systemctl                        | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | cat crio --no-pager                                  |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo find                             | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |                |                     |                     |
	| ssh     | -p auto-688825 sudo crio                             | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	|         | config                                               |                |         |                |                     |                     |
	| delete  | -p auto-688825                                       | auto-688825    | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC | 10 Apr 24 23:10 UTC |
	| start   | -p calico-688825 --memory=3072                       | calico-688825  | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:10 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |                |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |                |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                |         |                |                     |                     |
	|         | --container-runtime=crio                             |                |         |                |                     |                     |
	| ssh     | -p kindnet-688825 pgrep -a                           | kindnet-688825 | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:11 UTC | 10 Apr 24 23:11 UTC |
	|         | kubelet                                              |                |         |                |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 23:10:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 23:10:25.998733   67722 out.go:291] Setting OutFile to fd 1 ...
	I0410 23:10:25.999036   67722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 23:10:25.999051   67722 out.go:304] Setting ErrFile to fd 2...
	I0410 23:10:25.999058   67722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 23:10:25.999373   67722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 23:10:25.999989   67722 out.go:298] Setting JSON to false
	I0410 23:10:26.001084   67722 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6768,"bootTime":1712783858,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 23:10:26.001146   67722 start.go:139] virtualization: kvm guest
	I0410 23:10:26.003647   67722 out.go:177] * [calico-688825] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 23:10:26.005112   67722 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 23:10:26.005120   67722 notify.go:220] Checking for updates...
	I0410 23:10:26.006527   67722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 23:10:26.008126   67722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 23:10:26.009506   67722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 23:10:26.010734   67722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 23:10:26.012031   67722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 23:10:26.013973   67722 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:10:26.014115   67722 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:10:26.014248   67722 config.go:182] Loaded profile config "kindnet-688825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:10:26.014398   67722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 23:10:26.055093   67722 out.go:177] * Using the kvm2 driver based on user configuration
	I0410 23:10:26.056491   67722 start.go:297] selected driver: kvm2
	I0410 23:10:26.056504   67722 start.go:901] validating driver "kvm2" against <nil>
	I0410 23:10:26.056515   67722 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 23:10:26.057429   67722 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 23:10:26.057536   67722 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 23:10:26.074476   67722 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 23:10:26.074567   67722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0410 23:10:26.074892   67722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 23:10:26.074983   67722 cni.go:84] Creating CNI manager for "calico"
	I0410 23:10:26.075002   67722 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0410 23:10:26.075065   67722 start.go:340] cluster config:
	{Name:calico-688825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 23:10:26.075207   67722 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 23:10:26.077333   67722 out.go:177] * Starting "calico-688825" primary control-plane node in "calico-688825" cluster
	I0410 23:10:26.078866   67722 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 23:10:26.078913   67722 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 23:10:26.078940   67722 cache.go:56] Caching tarball of preloaded images
	I0410 23:10:26.079032   67722 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 23:10:26.079050   67722 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 23:10:26.079190   67722 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/config.json ...
	I0410 23:10:26.079223   67722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/config.json: {Name:mkfd0d2fa1acdbf5591c3a1e4b79e0e50db5beca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:26.079419   67722 start.go:360] acquireMachinesLock for calico-688825: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 23:10:26.079468   67722 start.go:364] duration metric: took 27.014µs to acquireMachinesLock for "calico-688825"
	I0410 23:10:26.079494   67722 start.go:93] Provisioning new machine with config: &{Name:calico-688825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 23:10:26.079582   67722 start.go:125] createHost starting for "" (driver="kvm2")
	I0410 23:10:25.989353   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetIP
	I0410 23:10:25.992837   66181 main.go:141] libmachine: (kindnet-688825) DBG | domain kindnet-688825 has defined MAC address 52:54:00:29:4d:75 in network mk-kindnet-688825
	I0410 23:10:25.993224   66181 main.go:141] libmachine: (kindnet-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:4d:75", ip: ""} in network mk-kindnet-688825: {Iface:virbr1 ExpiryTime:2024-04-11 00:10:10 +0000 UTC Type:0 Mac:52:54:00:29:4d:75 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:kindnet-688825 Clientid:01:52:54:00:29:4d:75}
	I0410 23:10:25.993249   66181 main.go:141] libmachine: (kindnet-688825) DBG | domain kindnet-688825 has defined IP address 192.168.61.225 and MAC address 52:54:00:29:4d:75 in network mk-kindnet-688825
	I0410 23:10:25.993474   66181 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0410 23:10:25.998580   66181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 23:10:26.014274   66181 kubeadm.go:877] updating cluster {Name:kindnet-688825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 23:10:26.014404   66181 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 23:10:26.014481   66181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 23:10:26.050667   66181 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 23:10:26.050727   66181 ssh_runner.go:195] Run: which lz4
	I0410 23:10:26.056163   66181 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 23:10:26.060957   66181 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 23:10:26.060992   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 23:10:27.714046   66181 crio.go:462] duration metric: took 1.657928786s to copy over tarball
	I0410 23:10:27.714121   66181 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 23:10:26.081407   67722 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0410 23:10:26.081544   67722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 23:10:26.081578   67722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 23:10:26.097462   67722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38751
	I0410 23:10:26.097948   67722 main.go:141] libmachine: () Calling .GetVersion
	I0410 23:10:26.098727   67722 main.go:141] libmachine: Using API Version  1
	I0410 23:10:26.098763   67722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 23:10:26.099125   67722 main.go:141] libmachine: () Calling .GetMachineName
	I0410 23:10:26.099392   67722 main.go:141] libmachine: (calico-688825) Calling .GetMachineName
	I0410 23:10:26.099556   67722 main.go:141] libmachine: (calico-688825) Calling .DriverName
	I0410 23:10:26.099733   67722 start.go:159] libmachine.API.Create for "calico-688825" (driver="kvm2")
	I0410 23:10:26.099797   67722 client.go:168] LocalClient.Create starting
	I0410 23:10:26.099835   67722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem
	I0410 23:10:26.099881   67722 main.go:141] libmachine: Decoding PEM data...
	I0410 23:10:26.099903   67722 main.go:141] libmachine: Parsing certificate...
	I0410 23:10:26.099976   67722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem
	I0410 23:10:26.100006   67722 main.go:141] libmachine: Decoding PEM data...
	I0410 23:10:26.100029   67722 main.go:141] libmachine: Parsing certificate...
	I0410 23:10:26.100063   67722 main.go:141] libmachine: Running pre-create checks...
	I0410 23:10:26.100076   67722 main.go:141] libmachine: (calico-688825) Calling .PreCreateCheck
	I0410 23:10:26.100388   67722 main.go:141] libmachine: (calico-688825) Calling .GetConfigRaw
	I0410 23:10:26.100873   67722 main.go:141] libmachine: Creating machine...
	I0410 23:10:26.100893   67722 main.go:141] libmachine: (calico-688825) Calling .Create
	I0410 23:10:26.101045   67722 main.go:141] libmachine: (calico-688825) Creating KVM machine...
	I0410 23:10:26.102608   67722 main.go:141] libmachine: (calico-688825) DBG | found existing default KVM network
	I0410 23:10:26.103668   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:26.103538   67744 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:57:4f:87} reservation:<nil>}
	I0410 23:10:26.104737   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:26.104645   67744 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000215f50}
	I0410 23:10:26.104777   67722 main.go:141] libmachine: (calico-688825) DBG | created network xml: 
	I0410 23:10:26.104794   67722 main.go:141] libmachine: (calico-688825) DBG | <network>
	I0410 23:10:26.104804   67722 main.go:141] libmachine: (calico-688825) DBG |   <name>mk-calico-688825</name>
	I0410 23:10:26.104816   67722 main.go:141] libmachine: (calico-688825) DBG |   <dns enable='no'/>
	I0410 23:10:26.104866   67722 main.go:141] libmachine: (calico-688825) DBG |   
	I0410 23:10:26.104893   67722 main.go:141] libmachine: (calico-688825) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0410 23:10:26.104958   67722 main.go:141] libmachine: (calico-688825) DBG |     <dhcp>
	I0410 23:10:26.104985   67722 main.go:141] libmachine: (calico-688825) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0410 23:10:26.105014   67722 main.go:141] libmachine: (calico-688825) DBG |     </dhcp>
	I0410 23:10:26.105042   67722 main.go:141] libmachine: (calico-688825) DBG |   </ip>
	I0410 23:10:26.105059   67722 main.go:141] libmachine: (calico-688825) DBG |   
	I0410 23:10:26.105073   67722 main.go:141] libmachine: (calico-688825) DBG | </network>
	I0410 23:10:26.105088   67722 main.go:141] libmachine: (calico-688825) DBG | 
	I0410 23:10:26.110172   67722 main.go:141] libmachine: (calico-688825) DBG | trying to create private KVM network mk-calico-688825 192.168.50.0/24...
	I0410 23:10:26.191201   67722 main.go:141] libmachine: (calico-688825) Setting up store path in /home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825 ...
	I0410 23:10:26.191239   67722 main.go:141] libmachine: (calico-688825) Building disk image from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso
	I0410 23:10:26.191251   67722 main.go:141] libmachine: (calico-688825) DBG | private KVM network mk-calico-688825 192.168.50.0/24 created
	I0410 23:10:26.191271   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:26.191139   67744 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 23:10:26.191305   67722 main.go:141] libmachine: (calico-688825) Downloading /home/jenkins/minikube-integration/18610-5679/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso...
	I0410 23:10:26.451550   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:26.451376   67744 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825/id_rsa...
	I0410 23:10:26.567026   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:26.566871   67744 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825/calico-688825.rawdisk...
	I0410 23:10:26.567057   67722 main.go:141] libmachine: (calico-688825) DBG | Writing magic tar header
	I0410 23:10:26.567072   67722 main.go:141] libmachine: (calico-688825) DBG | Writing SSH key tar header
	I0410 23:10:26.567083   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:26.567049   67744 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825 ...
	I0410 23:10:26.567229   67722 main.go:141] libmachine: (calico-688825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825
	I0410 23:10:26.567279   67722 main.go:141] libmachine: (calico-688825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines
	I0410 23:10:26.567298   67722 main.go:141] libmachine: (calico-688825) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825 (perms=drwx------)
	I0410 23:10:26.567317   67722 main.go:141] libmachine: (calico-688825) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines (perms=drwxr-xr-x)
	I0410 23:10:26.567331   67722 main.go:141] libmachine: (calico-688825) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube (perms=drwxr-xr-x)
	I0410 23:10:26.567346   67722 main.go:141] libmachine: (calico-688825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 23:10:26.567360   67722 main.go:141] libmachine: (calico-688825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679
	I0410 23:10:26.567372   67722 main.go:141] libmachine: (calico-688825) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679 (perms=drwxrwxr-x)
	I0410 23:10:26.567389   67722 main.go:141] libmachine: (calico-688825) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0410 23:10:26.567403   67722 main.go:141] libmachine: (calico-688825) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0410 23:10:26.567416   67722 main.go:141] libmachine: (calico-688825) Creating domain...
	I0410 23:10:26.567431   67722 main.go:141] libmachine: (calico-688825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0410 23:10:26.567456   67722 main.go:141] libmachine: (calico-688825) DBG | Checking permissions on dir: /home/jenkins
	I0410 23:10:26.567466   67722 main.go:141] libmachine: (calico-688825) DBG | Checking permissions on dir: /home
	I0410 23:10:26.567481   67722 main.go:141] libmachine: (calico-688825) DBG | Skipping /home - not owner
	I0410 23:10:26.568542   67722 main.go:141] libmachine: (calico-688825) define libvirt domain using xml: 
	I0410 23:10:26.568555   67722 main.go:141] libmachine: (calico-688825) <domain type='kvm'>
	I0410 23:10:26.568562   67722 main.go:141] libmachine: (calico-688825)   <name>calico-688825</name>
	I0410 23:10:26.568570   67722 main.go:141] libmachine: (calico-688825)   <memory unit='MiB'>3072</memory>
	I0410 23:10:26.568576   67722 main.go:141] libmachine: (calico-688825)   <vcpu>2</vcpu>
	I0410 23:10:26.568591   67722 main.go:141] libmachine: (calico-688825)   <features>
	I0410 23:10:26.568601   67722 main.go:141] libmachine: (calico-688825)     <acpi/>
	I0410 23:10:26.568607   67722 main.go:141] libmachine: (calico-688825)     <apic/>
	I0410 23:10:26.568621   67722 main.go:141] libmachine: (calico-688825)     <pae/>
	I0410 23:10:26.568632   67722 main.go:141] libmachine: (calico-688825)     
	I0410 23:10:26.568641   67722 main.go:141] libmachine: (calico-688825)   </features>
	I0410 23:10:26.568657   67722 main.go:141] libmachine: (calico-688825)   <cpu mode='host-passthrough'>
	I0410 23:10:26.568665   67722 main.go:141] libmachine: (calico-688825)   
	I0410 23:10:26.568676   67722 main.go:141] libmachine: (calico-688825)   </cpu>
	I0410 23:10:26.568687   67722 main.go:141] libmachine: (calico-688825)   <os>
	I0410 23:10:26.568697   67722 main.go:141] libmachine: (calico-688825)     <type>hvm</type>
	I0410 23:10:26.568709   67722 main.go:141] libmachine: (calico-688825)     <boot dev='cdrom'/>
	I0410 23:10:26.568718   67722 main.go:141] libmachine: (calico-688825)     <boot dev='hd'/>
	I0410 23:10:26.568754   67722 main.go:141] libmachine: (calico-688825)     <bootmenu enable='no'/>
	I0410 23:10:26.568783   67722 main.go:141] libmachine: (calico-688825)   </os>
	I0410 23:10:26.568794   67722 main.go:141] libmachine: (calico-688825)   <devices>
	I0410 23:10:26.568802   67722 main.go:141] libmachine: (calico-688825)     <disk type='file' device='cdrom'>
	I0410 23:10:26.568815   67722 main.go:141] libmachine: (calico-688825)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825/boot2docker.iso'/>
	I0410 23:10:26.568827   67722 main.go:141] libmachine: (calico-688825)       <target dev='hdc' bus='scsi'/>
	I0410 23:10:26.568835   67722 main.go:141] libmachine: (calico-688825)       <readonly/>
	I0410 23:10:26.568846   67722 main.go:141] libmachine: (calico-688825)     </disk>
	I0410 23:10:26.568854   67722 main.go:141] libmachine: (calico-688825)     <disk type='file' device='disk'>
	I0410 23:10:26.568866   67722 main.go:141] libmachine: (calico-688825)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0410 23:10:26.568879   67722 main.go:141] libmachine: (calico-688825)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825/calico-688825.rawdisk'/>
	I0410 23:10:26.568887   67722 main.go:141] libmachine: (calico-688825)       <target dev='hda' bus='virtio'/>
	I0410 23:10:26.568900   67722 main.go:141] libmachine: (calico-688825)     </disk>
	I0410 23:10:26.568907   67722 main.go:141] libmachine: (calico-688825)     <interface type='network'>
	I0410 23:10:26.568919   67722 main.go:141] libmachine: (calico-688825)       <source network='mk-calico-688825'/>
	I0410 23:10:26.568931   67722 main.go:141] libmachine: (calico-688825)       <model type='virtio'/>
	I0410 23:10:26.568943   67722 main.go:141] libmachine: (calico-688825)     </interface>
	I0410 23:10:26.568951   67722 main.go:141] libmachine: (calico-688825)     <interface type='network'>
	I0410 23:10:26.568959   67722 main.go:141] libmachine: (calico-688825)       <source network='default'/>
	I0410 23:10:26.568967   67722 main.go:141] libmachine: (calico-688825)       <model type='virtio'/>
	I0410 23:10:26.568978   67722 main.go:141] libmachine: (calico-688825)     </interface>
	I0410 23:10:26.568986   67722 main.go:141] libmachine: (calico-688825)     <serial type='pty'>
	I0410 23:10:26.568997   67722 main.go:141] libmachine: (calico-688825)       <target port='0'/>
	I0410 23:10:26.569005   67722 main.go:141] libmachine: (calico-688825)     </serial>
	I0410 23:10:26.569016   67722 main.go:141] libmachine: (calico-688825)     <console type='pty'>
	I0410 23:10:26.569026   67722 main.go:141] libmachine: (calico-688825)       <target type='serial' port='0'/>
	I0410 23:10:26.569031   67722 main.go:141] libmachine: (calico-688825)     </console>
	I0410 23:10:26.569038   67722 main.go:141] libmachine: (calico-688825)     <rng model='virtio'>
	I0410 23:10:26.569044   67722 main.go:141] libmachine: (calico-688825)       <backend model='random'>/dev/random</backend>
	I0410 23:10:26.569051   67722 main.go:141] libmachine: (calico-688825)     </rng>
	I0410 23:10:26.569056   67722 main.go:141] libmachine: (calico-688825)     
	I0410 23:10:26.569062   67722 main.go:141] libmachine: (calico-688825)     
	I0410 23:10:26.569068   67722 main.go:141] libmachine: (calico-688825)   </devices>
	I0410 23:10:26.569076   67722 main.go:141] libmachine: (calico-688825) </domain>
	I0410 23:10:26.569086   67722 main.go:141] libmachine: (calico-688825) 
	I0410 23:10:26.574041   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:94:ad:7b in network default
	I0410 23:10:26.574878   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:26.574908   67722 main.go:141] libmachine: (calico-688825) Ensuring networks are active...
	I0410 23:10:26.575703   67722 main.go:141] libmachine: (calico-688825) Ensuring network default is active
	I0410 23:10:26.576032   67722 main.go:141] libmachine: (calico-688825) Ensuring network mk-calico-688825 is active
	I0410 23:10:26.576754   67722 main.go:141] libmachine: (calico-688825) Getting domain xml...
	I0410 23:10:26.577810   67722 main.go:141] libmachine: (calico-688825) Creating domain...
	I0410 23:10:28.157164   67722 main.go:141] libmachine: (calico-688825) Waiting to get IP...
	I0410 23:10:28.158033   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:28.158482   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:28.158503   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:28.158450   67744 retry.go:31] will retry after 207.451028ms: waiting for machine to come up
	I0410 23:10:28.368099   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:28.368660   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:28.368690   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:28.368604   67744 retry.go:31] will retry after 249.491035ms: waiting for machine to come up
	I0410 23:10:28.620500   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:28.621289   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:28.621313   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:28.621205   67744 retry.go:31] will retry after 438.786749ms: waiting for machine to come up
	I0410 23:10:29.061956   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:29.062514   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:29.062551   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:29.062446   67744 retry.go:31] will retry after 467.804988ms: waiting for machine to come up
	I0410 23:10:29.532315   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:29.532930   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:29.532952   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:29.532875   67744 retry.go:31] will retry after 666.469088ms: waiting for machine to come up
	I0410 23:10:30.200609   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:30.201127   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:30.201160   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:30.201083   67744 retry.go:31] will retry after 838.020891ms: waiting for machine to come up
	I0410 23:10:30.513260   66181 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.799101989s)
	I0410 23:10:30.513285   66181 crio.go:469] duration metric: took 2.799209212s to extract the tarball
	I0410 23:10:30.513292   66181 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 23:10:30.554453   66181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 23:10:30.611425   66181 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 23:10:30.611451   66181 cache_images.go:84] Images are preloaded, skipping loading
	I0410 23:10:30.611459   66181 kubeadm.go:928] updating node { 192.168.61.225 8443 v1.29.3 crio true true} ...
	I0410 23:10:30.611575   66181 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-688825 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:kindnet-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0410 23:10:30.611651   66181 ssh_runner.go:195] Run: crio config
	I0410 23:10:30.663578   66181 cni.go:84] Creating CNI manager for "kindnet"
	I0410 23:10:30.663602   66181 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 23:10:30.663628   66181 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.225 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-688825 NodeName:kindnet-688825 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 23:10:30.663767   66181 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-688825"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 23:10:30.663830   66181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 23:10:30.676876   66181 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 23:10:30.676943   66181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 23:10:30.688298   66181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0410 23:10:30.708547   66181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 23:10:30.728117   66181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0410 23:10:30.748570   66181 ssh_runner.go:195] Run: grep 192.168.61.225	control-plane.minikube.internal$ /etc/hosts
	I0410 23:10:30.755341   66181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 23:10:30.776893   66181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 23:10:30.933182   66181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 23:10:30.959448   66181 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825 for IP: 192.168.61.225
	I0410 23:10:30.959473   66181 certs.go:194] generating shared ca certs ...
	I0410 23:10:30.959506   66181 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:30.959661   66181 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 23:10:30.959700   66181 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 23:10:30.959708   66181 certs.go:256] generating profile certs ...
	I0410 23:10:30.959756   66181 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/client.key
	I0410 23:10:30.959769   66181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/client.crt with IP's: []
	I0410 23:10:31.262399   66181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/client.crt ...
	I0410 23:10:31.262433   66181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/client.crt: {Name:mkfea6785ed9b027b2f619c170e37999dd921882 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:31.262627   66181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/client.key ...
	I0410 23:10:31.262645   66181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/client.key: {Name:mk42145abb63ba70c57b7d4f732fef2cf966cc62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:31.262758   66181 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.key.532286f8
	I0410 23:10:31.262780   66181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.crt.532286f8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.225]
	I0410 23:10:31.340289   66181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.crt.532286f8 ...
	I0410 23:10:31.340319   66181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.crt.532286f8: {Name:mk8968a94a387ffb366a740bb26c1cff87194b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:31.340505   66181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.key.532286f8 ...
	I0410 23:10:31.340529   66181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.key.532286f8: {Name:mkab32a91ff18af9f1bd44702ac0989596faefac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:31.340633   66181 certs.go:381] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.crt.532286f8 -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.crt
	I0410 23:10:31.340736   66181 certs.go:385] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.key.532286f8 -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.key
	I0410 23:10:31.340822   66181 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/proxy-client.key
	I0410 23:10:31.340846   66181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/proxy-client.crt with IP's: []
	I0410 23:10:31.497032   66181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/proxy-client.crt ...
	I0410 23:10:31.497063   66181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/proxy-client.crt: {Name:mkab083f8f553b022fc9cfc68affa3ce43f8ab61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:31.497265   66181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/proxy-client.key ...
	I0410 23:10:31.497285   66181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/proxy-client.key: {Name:mk65bb17716bc3446799ddae0e1bff0a327f584a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:31.497462   66181 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 23:10:31.497515   66181 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 23:10:31.497530   66181 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 23:10:31.497559   66181 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 23:10:31.497590   66181 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 23:10:31.497629   66181 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 23:10:31.497683   66181 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 23:10:31.498475   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 23:10:31.529573   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 23:10:31.558463   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 23:10:31.587888   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 23:10:31.617839   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0410 23:10:31.655425   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 23:10:31.686392   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 23:10:31.710453   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/kindnet-688825/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 23:10:31.742030   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 23:10:31.774423   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 23:10:31.802907   66181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 23:10:31.831816   66181 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 23:10:31.849690   66181 ssh_runner.go:195] Run: openssl version
	I0410 23:10:31.856222   66181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 23:10:31.868611   66181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 23:10:31.873633   66181 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 23:10:31.873691   66181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 23:10:31.880022   66181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 23:10:31.891741   66181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 23:10:31.903429   66181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 23:10:31.908696   66181 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 23:10:31.908802   66181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 23:10:31.915357   66181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 23:10:31.927238   66181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 23:10:31.938366   66181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 23:10:31.944360   66181 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 23:10:31.944427   66181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 23:10:31.952241   66181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 23:10:31.964626   66181 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 23:10:31.969248   66181 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0410 23:10:31.969307   66181 kubeadm.go:391] StartCluster: {Name:kindnet-688825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 23:10:31.969401   66181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 23:10:31.969446   66181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 23:10:32.013421   66181 cri.go:89] found id: ""
	I0410 23:10:32.013493   66181 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0410 23:10:32.024751   66181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 23:10:32.035713   66181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 23:10:32.046133   66181 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 23:10:32.046183   66181 kubeadm.go:156] found existing configuration files:
	
	I0410 23:10:32.046238   66181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 23:10:32.056079   66181 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 23:10:32.056140   66181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 23:10:32.066515   66181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 23:10:32.076506   66181 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 23:10:32.076584   66181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 23:10:32.087074   66181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 23:10:32.096854   66181 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 23:10:32.096917   66181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 23:10:32.107724   66181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 23:10:32.117761   66181 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 23:10:32.117822   66181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 23:10:32.128855   66181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 23:10:32.193382   66181 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0410 23:10:32.193764   66181 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 23:10:32.344326   66181 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 23:10:32.344500   66181 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 23:10:32.344642   66181 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 23:10:32.584075   66181 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 23:10:32.586983   66181 out.go:204]   - Generating certificates and keys ...
	I0410 23:10:32.587114   66181 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 23:10:32.587213   66181 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 23:10:32.742995   66181 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0410 23:10:32.915920   66181 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0410 23:10:33.203245   66181 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0410 23:10:33.403285   66181 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0410 23:10:33.486386   66181 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0410 23:10:33.486687   66181 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-688825 localhost] and IPs [192.168.61.225 127.0.0.1 ::1]
	I0410 23:10:33.717989   66181 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0410 23:10:33.718391   66181 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-688825 localhost] and IPs [192.168.61.225 127.0.0.1 ::1]
	I0410 23:10:33.880981   66181 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0410 23:10:34.096791   66181 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0410 23:10:34.378219   66181 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0410 23:10:34.378715   66181 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 23:10:34.594620   66181 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 23:10:34.872706   66181 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 23:10:34.950156   66181 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 23:10:35.182676   66181 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 23:10:35.345525   66181 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 23:10:35.346278   66181 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 23:10:35.348746   66181 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 23:10:31.041248   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:31.041779   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:31.041802   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:31.041736   67744 retry.go:31] will retry after 1.018249568s: waiting for machine to come up
	I0410 23:10:32.061144   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:32.061716   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:32.061743   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:32.061658   67744 retry.go:31] will retry after 1.483048837s: waiting for machine to come up
	I0410 23:10:33.545953   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:33.546494   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:33.546521   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:33.546451   67744 retry.go:31] will retry after 1.76423118s: waiting for machine to come up
	I0410 23:10:35.312115   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:35.312613   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:35.312631   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:35.312591   67744 retry.go:31] will retry after 1.518029054s: waiting for machine to come up
	I0410 23:10:35.350748   66181 out.go:204]   - Booting up control plane ...
	I0410 23:10:35.350873   66181 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 23:10:35.351001   66181 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 23:10:35.351156   66181 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 23:10:35.370074   66181 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 23:10:35.370953   66181 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 23:10:35.371028   66181 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 23:10:35.511935   66181 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 23:10:36.832804   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:36.833340   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:36.833385   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:36.833313   67744 retry.go:31] will retry after 2.675935561s: waiting for machine to come up
	I0410 23:10:39.510774   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:39.511266   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:39.511296   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:39.511214   67744 retry.go:31] will retry after 3.285780537s: waiting for machine to come up
	I0410 23:10:42.017217   66181 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503027 seconds
	I0410 23:10:42.034683   66181 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 23:10:42.049694   66181 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 23:10:42.586206   66181 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 23:10:42.586493   66181 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-688825 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 23:10:43.100484   66181 kubeadm.go:309] [bootstrap-token] Using token: x4i6ho.6yrkbhwmgymjc2aj
	I0410 23:10:43.102138   66181 out.go:204]   - Configuring RBAC rules ...
	I0410 23:10:43.102285   66181 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 23:10:43.110298   66181 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 23:10:43.122125   66181 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 23:10:43.126828   66181 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 23:10:43.140199   66181 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 23:10:43.144293   66181 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 23:10:43.172275   66181 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 23:10:43.477804   66181 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 23:10:43.528528   66181 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 23:10:43.528549   66181 kubeadm.go:309] 
	I0410 23:10:43.528604   66181 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 23:10:43.528608   66181 kubeadm.go:309] 
	I0410 23:10:43.528682   66181 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 23:10:43.528693   66181 kubeadm.go:309] 
	I0410 23:10:43.528728   66181 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 23:10:43.528822   66181 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 23:10:43.528869   66181 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 23:10:43.528898   66181 kubeadm.go:309] 
	I0410 23:10:43.529017   66181 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 23:10:43.529030   66181 kubeadm.go:309] 
	I0410 23:10:43.529090   66181 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 23:10:43.529106   66181 kubeadm.go:309] 
	I0410 23:10:43.529156   66181 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 23:10:43.529228   66181 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 23:10:43.529327   66181 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 23:10:43.529336   66181 kubeadm.go:309] 
	I0410 23:10:43.529480   66181 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 23:10:43.529596   66181 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 23:10:43.529607   66181 kubeadm.go:309] 
	I0410 23:10:43.529714   66181 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x4i6ho.6yrkbhwmgymjc2aj \
	I0410 23:10:43.529860   66181 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 23:10:43.529914   66181 kubeadm.go:309] 	--control-plane 
	I0410 23:10:43.529925   66181 kubeadm.go:309] 
	I0410 23:10:43.530054   66181 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 23:10:43.530074   66181 kubeadm.go:309] 
	I0410 23:10:43.530197   66181 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x4i6ho.6yrkbhwmgymjc2aj \
	I0410 23:10:43.530325   66181 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 23:10:43.530755   66181 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 23:10:43.530800   66181 cni.go:84] Creating CNI manager for "kindnet"
	I0410 23:10:43.533868   66181 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0410 23:10:43.535168   66181 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0410 23:10:43.542366   66181 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0410 23:10:43.542390   66181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0410 23:10:43.582310   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0410 23:10:44.037582   66181 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 23:10:44.037644   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:44.037710   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-688825 minikube.k8s.io/updated_at=2024_04_10T23_10_44_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=kindnet-688825 minikube.k8s.io/primary=true
	I0410 23:10:44.059208   66181 ops.go:34] apiserver oom_adj: -16
	I0410 23:10:44.186553   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:44.686889   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:42.798210   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:42.798777   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:42.798807   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:42.798712   67744 retry.go:31] will retry after 2.754961565s: waiting for machine to come up
	I0410 23:10:45.556491   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:45.557053   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find current IP address of domain calico-688825 in network mk-calico-688825
	I0410 23:10:45.557082   67722 main.go:141] libmachine: (calico-688825) DBG | I0410 23:10:45.557012   67744 retry.go:31] will retry after 5.477755067s: waiting for machine to come up
	I0410 23:10:45.187157   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:45.686911   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:46.186951   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:46.686725   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:47.186813   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:47.686649   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:48.187064   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:48.687248   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:49.186931   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:49.686802   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:51.036547   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.037071   67722 main.go:141] libmachine: (calico-688825) Found IP for machine: 192.168.50.77
	I0410 23:10:51.037096   67722 main.go:141] libmachine: (calico-688825) Reserving static IP address...
	I0410 23:10:51.037106   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has current primary IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.037512   67722 main.go:141] libmachine: (calico-688825) DBG | unable to find host DHCP lease matching {name: "calico-688825", mac: "52:54:00:f9:ad:c5", ip: "192.168.50.77"} in network mk-calico-688825
	I0410 23:10:51.113608   67722 main.go:141] libmachine: (calico-688825) DBG | Getting to WaitForSSH function...
	I0410 23:10:51.113641   67722 main.go:141] libmachine: (calico-688825) Reserved static IP address: 192.168.50.77
	I0410 23:10:51.113654   67722 main.go:141] libmachine: (calico-688825) Waiting for SSH to be available...
	I0410 23:10:51.116368   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.117045   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:51.117073   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.117196   67722 main.go:141] libmachine: (calico-688825) DBG | Using SSH client type: external
	I0410 23:10:51.117223   67722 main.go:141] libmachine: (calico-688825) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825/id_rsa (-rw-------)
	I0410 23:10:51.117251   67722 main.go:141] libmachine: (calico-688825) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 23:10:51.117276   67722 main.go:141] libmachine: (calico-688825) DBG | About to run SSH command:
	I0410 23:10:51.117299   67722 main.go:141] libmachine: (calico-688825) DBG | exit 0
	I0410 23:10:51.248618   67722 main.go:141] libmachine: (calico-688825) DBG | SSH cmd err, output: <nil>: 
	I0410 23:10:51.248870   67722 main.go:141] libmachine: (calico-688825) KVM machine creation complete!
	I0410 23:10:51.249228   67722 main.go:141] libmachine: (calico-688825) Calling .GetConfigRaw
	I0410 23:10:51.249788   67722 main.go:141] libmachine: (calico-688825) Calling .DriverName
	I0410 23:10:51.250021   67722 main.go:141] libmachine: (calico-688825) Calling .DriverName
	I0410 23:10:51.250205   67722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0410 23:10:51.250228   67722 main.go:141] libmachine: (calico-688825) Calling .GetState
	I0410 23:10:51.251703   67722 main.go:141] libmachine: Detecting operating system of created instance...
	I0410 23:10:51.251718   67722 main.go:141] libmachine: Waiting for SSH to be available...
	I0410 23:10:51.251723   67722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0410 23:10:51.251730   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:51.254353   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.254711   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:51.254736   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.254890   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHPort
	I0410 23:10:51.255091   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:51.255246   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:51.255393   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHUsername
	I0410 23:10:51.255567   67722 main.go:141] libmachine: Using SSH client type: native
	I0410 23:10:51.255757   67722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0410 23:10:51.255773   67722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0410 23:10:51.371990   67722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 23:10:51.372016   67722 main.go:141] libmachine: Detecting the provisioner...
	I0410 23:10:51.372026   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:51.375151   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.375525   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:51.375579   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.375833   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHPort
	I0410 23:10:51.376036   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:51.376220   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:51.376380   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHUsername
	I0410 23:10:51.376559   67722 main.go:141] libmachine: Using SSH client type: native
	I0410 23:10:51.376776   67722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0410 23:10:51.376789   67722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0410 23:10:51.493780   67722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0410 23:10:51.493849   67722 main.go:141] libmachine: found compatible host: buildroot
	I0410 23:10:51.493861   67722 main.go:141] libmachine: Provisioning with buildroot...
	I0410 23:10:51.493881   67722 main.go:141] libmachine: (calico-688825) Calling .GetMachineName
	I0410 23:10:51.494193   67722 buildroot.go:166] provisioning hostname "calico-688825"
	I0410 23:10:51.494224   67722 main.go:141] libmachine: (calico-688825) Calling .GetMachineName
	I0410 23:10:51.494407   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:51.497215   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.497722   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:51.497753   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.497898   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHPort
	I0410 23:10:51.498054   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:51.498204   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:51.498375   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHUsername
	I0410 23:10:51.498532   67722 main.go:141] libmachine: Using SSH client type: native
	I0410 23:10:51.498757   67722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0410 23:10:51.498779   67722 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-688825 && echo "calico-688825" | sudo tee /etc/hostname
	I0410 23:10:51.628319   67722 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-688825
	
	I0410 23:10:51.628352   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:51.631153   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.631628   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:51.631657   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.631920   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHPort
	I0410 23:10:51.632138   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:51.632341   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:51.632518   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHUsername
	I0410 23:10:51.632773   67722 main.go:141] libmachine: Using SSH client type: native
	I0410 23:10:51.632981   67722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0410 23:10:51.632999   67722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-688825' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-688825/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-688825' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 23:10:51.764066   67722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 23:10:51.764096   67722 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 23:10:51.764114   67722 buildroot.go:174] setting up certificates
	I0410 23:10:51.764150   67722 provision.go:84] configureAuth start
	I0410 23:10:51.764162   67722 main.go:141] libmachine: (calico-688825) Calling .GetMachineName
	I0410 23:10:51.764467   67722 main.go:141] libmachine: (calico-688825) Calling .GetIP
	I0410 23:10:51.767196   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.767538   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:51.767582   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.767707   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:51.770095   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.770469   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:51.770494   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:51.770659   67722 provision.go:143] copyHostCerts
	I0410 23:10:51.770707   67722 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 23:10:51.770727   67722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 23:10:51.770802   67722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 23:10:51.770920   67722 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 23:10:51.770946   67722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 23:10:51.770975   67722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 23:10:51.771059   67722 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 23:10:51.771070   67722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 23:10:51.771099   67722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 23:10:51.771178   67722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.calico-688825 san=[127.0.0.1 192.168.50.77 calico-688825 localhost minikube]
	I0410 23:10:52.114691   67722 provision.go:177] copyRemoteCerts
	I0410 23:10:52.114746   67722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 23:10:52.114767   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:52.117592   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.117915   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:52.117938   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.118161   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHPort
	I0410 23:10:52.118352   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:52.118534   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHUsername
	I0410 23:10:52.118689   67722 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825/id_rsa Username:docker}
	I0410 23:10:52.211718   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 23:10:52.242756   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0410 23:10:52.274563   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 23:10:52.303158   67722 provision.go:87] duration metric: took 538.995608ms to configureAuth
	I0410 23:10:52.303185   67722 buildroot.go:189] setting minikube options for container-runtime
	I0410 23:10:52.303340   67722 config.go:182] Loaded profile config "calico-688825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:10:52.303415   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:52.306531   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.306920   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:52.306950   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.307145   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHPort
	I0410 23:10:52.307357   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:52.307555   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:52.307752   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHUsername
	I0410 23:10:52.307970   67722 main.go:141] libmachine: Using SSH client type: native
	I0410 23:10:52.308206   67722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0410 23:10:52.308231   67722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 23:10:52.602562   67722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 23:10:52.602598   67722 main.go:141] libmachine: Checking connection to Docker...
	I0410 23:10:52.602611   67722 main.go:141] libmachine: (calico-688825) Calling .GetURL
	I0410 23:10:52.603903   67722 main.go:141] libmachine: (calico-688825) DBG | Using libvirt version 6000000
	I0410 23:10:52.606073   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.606410   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:52.606445   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.606581   67722 main.go:141] libmachine: Docker is up and running!
	I0410 23:10:52.606601   67722 main.go:141] libmachine: Reticulating splines...
	I0410 23:10:52.606608   67722 client.go:171] duration metric: took 26.506799688s to LocalClient.Create
	I0410 23:10:52.606634   67722 start.go:167] duration metric: took 26.506903945s to libmachine.API.Create "calico-688825"
	I0410 23:10:52.606647   67722 start.go:293] postStartSetup for "calico-688825" (driver="kvm2")
	I0410 23:10:52.606660   67722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 23:10:52.606682   67722 main.go:141] libmachine: (calico-688825) Calling .DriverName
	I0410 23:10:52.606935   67722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 23:10:52.606962   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:52.609147   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.609507   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:52.609535   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.609671   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHPort
	I0410 23:10:52.609868   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:52.610078   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHUsername
	I0410 23:10:52.610225   67722 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825/id_rsa Username:docker}
	I0410 23:10:52.701680   67722 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 23:10:52.706354   67722 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 23:10:52.706384   67722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 23:10:52.706459   67722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 23:10:52.706544   67722 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 23:10:52.706626   67722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 23:10:52.717588   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 23:10:52.744912   67722 start.go:296] duration metric: took 138.251522ms for postStartSetup
	I0410 23:10:52.744961   67722 main.go:141] libmachine: (calico-688825) Calling .GetConfigRaw
	I0410 23:10:52.745544   67722 main.go:141] libmachine: (calico-688825) Calling .GetIP
	I0410 23:10:52.748281   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.748655   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:52.748685   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.748877   67722 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/config.json ...
	I0410 23:10:52.749063   67722 start.go:128] duration metric: took 26.669469462s to createHost
	I0410 23:10:52.749086   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:52.751440   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.751899   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:52.751924   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.752097   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHPort
	I0410 23:10:52.752298   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:52.752470   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:52.752603   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHUsername
	I0410 23:10:52.752772   67722 main.go:141] libmachine: Using SSH client type: native
	I0410 23:10:52.753007   67722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0410 23:10:52.753023   67722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 23:10:52.873970   67722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712790652.821559182
	
	I0410 23:10:52.873990   67722 fix.go:216] guest clock: 1712790652.821559182
	I0410 23:10:52.874000   67722 fix.go:229] Guest: 2024-04-10 23:10:52.821559182 +0000 UTC Remote: 2024-04-10 23:10:52.749076016 +0000 UTC m=+26.803236622 (delta=72.483166ms)
	I0410 23:10:52.874044   67722 fix.go:200] guest clock delta is within tolerance: 72.483166ms
	I0410 23:10:52.874054   67722 start.go:83] releasing machines lock for "calico-688825", held for 26.794573923s
	I0410 23:10:52.874076   67722 main.go:141] libmachine: (calico-688825) Calling .DriverName
	I0410 23:10:52.874328   67722 main.go:141] libmachine: (calico-688825) Calling .GetIP
	I0410 23:10:52.877261   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.877647   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:52.877686   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.877833   67722 main.go:141] libmachine: (calico-688825) Calling .DriverName
	I0410 23:10:52.878427   67722 main.go:141] libmachine: (calico-688825) Calling .DriverName
	I0410 23:10:52.878617   67722 main.go:141] libmachine: (calico-688825) Calling .DriverName
	I0410 23:10:52.878690   67722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 23:10:52.878736   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:52.878886   67722 ssh_runner.go:195] Run: cat /version.json
	I0410 23:10:52.878912   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHHostname
	I0410 23:10:52.881532   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.881758   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.881908   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:52.881930   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.882067   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHPort
	I0410 23:10:52.882226   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:52.882245   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:52.882272   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:52.882453   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHUsername
	I0410 23:10:52.882487   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHPort
	I0410 23:10:52.882583   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHKeyPath
	I0410 23:10:52.882644   67722 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825/id_rsa Username:docker}
	I0410 23:10:52.882738   67722 main.go:141] libmachine: (calico-688825) Calling .GetSSHUsername
	I0410 23:10:52.882861   67722 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/calico-688825/id_rsa Username:docker}
	I0410 23:10:53.006239   67722 ssh_runner.go:195] Run: systemctl --version
	I0410 23:10:53.013018   67722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 23:10:53.177935   67722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 23:10:53.185090   67722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 23:10:53.185173   67722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 23:10:53.206956   67722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 23:10:53.206984   67722 start.go:494] detecting cgroup driver to use...
	I0410 23:10:53.207049   67722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 23:10:53.225453   67722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 23:10:53.244009   67722 docker.go:217] disabling cri-docker service (if available) ...
	I0410 23:10:53.244077   67722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 23:10:53.262958   67722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 23:10:53.280594   67722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 23:10:53.409593   67722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 23:10:53.578867   67722 docker.go:233] disabling docker service ...
	I0410 23:10:53.578957   67722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 23:10:53.594439   67722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 23:10:53.610628   67722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 23:10:53.749913   67722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 23:10:53.881515   67722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 23:10:53.896154   67722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 23:10:53.917481   67722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 23:10:53.917543   67722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:10:53.929388   67722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 23:10:53.929456   67722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:10:53.940890   67722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:10:53.952461   67722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:10:53.963594   67722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 23:10:53.975172   67722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:10:53.986877   67722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:10:54.006536   67722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 23:10:54.018085   67722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 23:10:54.028224   67722 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 23:10:54.028284   67722 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 23:10:54.041942   67722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 23:10:54.052700   67722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 23:10:54.189566   67722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 23:10:54.354238   67722 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 23:10:54.354335   67722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 23:10:54.360339   67722 start.go:562] Will wait 60s for crictl version
	I0410 23:10:54.360434   67722 ssh_runner.go:195] Run: which crictl
	I0410 23:10:54.364929   67722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 23:10:54.402331   67722 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 23:10:54.402485   67722 ssh_runner.go:195] Run: crio --version
	I0410 23:10:54.440219   67722 ssh_runner.go:195] Run: crio --version
	I0410 23:10:54.474354   67722 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 23:10:50.187345   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:50.687022   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:51.187335   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:51.686630   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:52.187076   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:52.687385   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:53.186798   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:53.686700   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:54.186691   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:54.687007   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:55.187502   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:55.687538   66181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 23:10:55.830644   66181 kubeadm.go:1107] duration metric: took 11.793051708s to wait for elevateKubeSystemPrivileges
	W0410 23:10:55.830686   66181 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 23:10:55.830695   66181 kubeadm.go:393] duration metric: took 23.861391059s to StartCluster
	I0410 23:10:55.830716   66181 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:55.830795   66181 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 23:10:55.832998   66181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:55.833292   66181 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 23:10:55.834947   66181 out.go:177] * Verifying Kubernetes components...
	I0410 23:10:55.833406   66181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0410 23:10:55.833424   66181 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 23:10:55.833624   66181 config.go:182] Loaded profile config "kindnet-688825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:10:55.836200   66181 addons.go:69] Setting storage-provisioner=true in profile "kindnet-688825"
	I0410 23:10:55.836241   66181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 23:10:55.836249   66181 addons.go:69] Setting default-storageclass=true in profile "kindnet-688825"
	I0410 23:10:55.836274   66181 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-688825"
	I0410 23:10:55.836243   66181 addons.go:234] Setting addon storage-provisioner=true in "kindnet-688825"
	I0410 23:10:55.836323   66181 host.go:66] Checking if "kindnet-688825" exists ...
	I0410 23:10:55.836756   66181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 23:10:55.836774   66181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 23:10:55.836779   66181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 23:10:55.836792   66181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 23:10:55.855505   66181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0410 23:10:55.855736   66181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
	I0410 23:10:55.856105   66181 main.go:141] libmachine: () Calling .GetVersion
	I0410 23:10:55.856206   66181 main.go:141] libmachine: () Calling .GetVersion
	I0410 23:10:55.856695   66181 main.go:141] libmachine: Using API Version  1
	I0410 23:10:55.856713   66181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 23:10:55.857099   66181 main.go:141] libmachine: () Calling .GetMachineName
	I0410 23:10:55.857640   66181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 23:10:55.857662   66181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 23:10:55.857956   66181 main.go:141] libmachine: Using API Version  1
	I0410 23:10:55.857968   66181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 23:10:55.858693   66181 main.go:141] libmachine: () Calling .GetMachineName
	I0410 23:10:55.858912   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetState
	I0410 23:10:55.863725   66181 addons.go:234] Setting addon default-storageclass=true in "kindnet-688825"
	I0410 23:10:55.863775   66181 host.go:66] Checking if "kindnet-688825" exists ...
	I0410 23:10:55.864153   66181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 23:10:55.864189   66181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 23:10:55.878494   66181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0410 23:10:55.878935   66181 main.go:141] libmachine: () Calling .GetVersion
	I0410 23:10:55.879424   66181 main.go:141] libmachine: Using API Version  1
	I0410 23:10:55.879438   66181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 23:10:55.880443   66181 main.go:141] libmachine: () Calling .GetMachineName
	I0410 23:10:55.880703   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetState
	I0410 23:10:55.883414   66181 main.go:141] libmachine: (kindnet-688825) Calling .DriverName
	I0410 23:10:55.885908   66181 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 23:10:54.475625   67722 main.go:141] libmachine: (calico-688825) Calling .GetIP
	I0410 23:10:54.478337   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:54.478681   67722 main.go:141] libmachine: (calico-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:ad:c5", ip: ""} in network mk-calico-688825: {Iface:virbr2 ExpiryTime:2024-04-11 00:10:42 +0000 UTC Type:0 Mac:52:54:00:f9:ad:c5 Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:calico-688825 Clientid:01:52:54:00:f9:ad:c5}
	I0410 23:10:54.478711   67722 main.go:141] libmachine: (calico-688825) DBG | domain calico-688825 has defined IP address 192.168.50.77 and MAC address 52:54:00:f9:ad:c5 in network mk-calico-688825
	I0410 23:10:54.478997   67722 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0410 23:10:54.483696   67722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 23:10:54.496950   67722 kubeadm.go:877] updating cluster {Name:calico-688825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:calico-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 23:10:54.497054   67722 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 23:10:54.497094   67722 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 23:10:54.534896   67722 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 23:10:54.534977   67722 ssh_runner.go:195] Run: which lz4
	I0410 23:10:54.539636   67722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 23:10:54.544431   67722 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 23:10:54.544470   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 23:10:55.884292   66181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0410 23:10:55.887433   66181 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 23:10:55.887462   66181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 23:10:55.887482   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetSSHHostname
	I0410 23:10:55.888135   66181 main.go:141] libmachine: () Calling .GetVersion
	I0410 23:10:55.888789   66181 main.go:141] libmachine: Using API Version  1
	I0410 23:10:55.888808   66181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 23:10:55.889216   66181 main.go:141] libmachine: () Calling .GetMachineName
	I0410 23:10:55.889842   66181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 23:10:55.889872   66181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 23:10:55.891929   66181 main.go:141] libmachine: (kindnet-688825) DBG | domain kindnet-688825 has defined MAC address 52:54:00:29:4d:75 in network mk-kindnet-688825
	I0410 23:10:55.892364   66181 main.go:141] libmachine: (kindnet-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:4d:75", ip: ""} in network mk-kindnet-688825: {Iface:virbr1 ExpiryTime:2024-04-11 00:10:10 +0000 UTC Type:0 Mac:52:54:00:29:4d:75 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:kindnet-688825 Clientid:01:52:54:00:29:4d:75}
	I0410 23:10:55.892378   66181 main.go:141] libmachine: (kindnet-688825) DBG | domain kindnet-688825 has defined IP address 192.168.61.225 and MAC address 52:54:00:29:4d:75 in network mk-kindnet-688825
	I0410 23:10:55.892732   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetSSHPort
	I0410 23:10:55.892904   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetSSHKeyPath
	I0410 23:10:55.893093   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetSSHUsername
	I0410 23:10:55.893220   66181 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kindnet-688825/id_rsa Username:docker}
	I0410 23:10:55.911558   66181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I0410 23:10:55.912002   66181 main.go:141] libmachine: () Calling .GetVersion
	I0410 23:10:55.912643   66181 main.go:141] libmachine: Using API Version  1
	I0410 23:10:55.912672   66181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 23:10:55.913124   66181 main.go:141] libmachine: () Calling .GetMachineName
	I0410 23:10:55.913332   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetState
	I0410 23:10:55.915841   66181 main.go:141] libmachine: (kindnet-688825) Calling .DriverName
	I0410 23:10:55.916158   66181 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 23:10:55.916177   66181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 23:10:55.916196   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetSSHHostname
	I0410 23:10:55.919769   66181 main.go:141] libmachine: (kindnet-688825) DBG | domain kindnet-688825 has defined MAC address 52:54:00:29:4d:75 in network mk-kindnet-688825
	I0410 23:10:55.920289   66181 main.go:141] libmachine: (kindnet-688825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:4d:75", ip: ""} in network mk-kindnet-688825: {Iface:virbr1 ExpiryTime:2024-04-11 00:10:10 +0000 UTC Type:0 Mac:52:54:00:29:4d:75 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:kindnet-688825 Clientid:01:52:54:00:29:4d:75}
	I0410 23:10:55.920312   66181 main.go:141] libmachine: (kindnet-688825) DBG | domain kindnet-688825 has defined IP address 192.168.61.225 and MAC address 52:54:00:29:4d:75 in network mk-kindnet-688825
	I0410 23:10:55.920666   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetSSHPort
	I0410 23:10:55.920873   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetSSHKeyPath
	I0410 23:10:55.921058   66181 main.go:141] libmachine: (kindnet-688825) Calling .GetSSHUsername
	I0410 23:10:55.921274   66181 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/kindnet-688825/id_rsa Username:docker}
	I0410 23:10:56.232112   66181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 23:10:56.232175   66181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0410 23:10:56.321039   66181 node_ready.go:35] waiting up to 15m0s for node "kindnet-688825" to be "Ready" ...
	I0410 23:10:56.330013   66181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 23:10:56.365877   66181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 23:10:57.071322   66181 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0410 23:10:57.071399   66181 main.go:141] libmachine: Making call to close driver server
	I0410 23:10:57.071469   66181 main.go:141] libmachine: (kindnet-688825) Calling .Close
	I0410 23:10:57.071788   66181 main.go:141] libmachine: Successfully made call to close driver server
	I0410 23:10:57.071820   66181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 23:10:57.071838   66181 main.go:141] libmachine: Making call to close driver server
	I0410 23:10:57.071858   66181 main.go:141] libmachine: (kindnet-688825) Calling .Close
	I0410 23:10:57.072149   66181 main.go:141] libmachine: Successfully made call to close driver server
	I0410 23:10:57.072162   66181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 23:10:57.138194   66181 main.go:141] libmachine: Making call to close driver server
	I0410 23:10:57.138285   66181 main.go:141] libmachine: (kindnet-688825) Calling .Close
	I0410 23:10:57.138618   66181 main.go:141] libmachine: Successfully made call to close driver server
	I0410 23:10:57.138632   66181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 23:10:57.488903   66181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.122979276s)
	I0410 23:10:57.488961   66181 main.go:141] libmachine: Making call to close driver server
	I0410 23:10:57.488973   66181 main.go:141] libmachine: (kindnet-688825) Calling .Close
	I0410 23:10:57.489393   66181 main.go:141] libmachine: (kindnet-688825) DBG | Closing plugin on server side
	I0410 23:10:57.489416   66181 main.go:141] libmachine: Successfully made call to close driver server
	I0410 23:10:57.489427   66181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 23:10:57.489451   66181 main.go:141] libmachine: Making call to close driver server
	I0410 23:10:57.489464   66181 main.go:141] libmachine: (kindnet-688825) Calling .Close
	I0410 23:10:57.489743   66181 main.go:141] libmachine: Successfully made call to close driver server
	I0410 23:10:57.489765   66181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 23:10:57.498610   66181 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0410 23:10:57.517508   66181 addons.go:505] duration metric: took 1.684080838s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0410 23:10:57.876664   66181 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-688825" context rescaled to 1 replicas
	I0410 23:10:58.686434   66181 node_ready.go:53] node "kindnet-688825" has status "Ready":"False"
	I0410 23:10:56.233641   67722 crio.go:462] duration metric: took 1.694049852s to copy over tarball
	I0410 23:10:56.233705   67722 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 23:10:59.124681   67722 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.890938996s)
	I0410 23:10:59.124712   67722 crio.go:469] duration metric: took 2.891047332s to extract the tarball
	I0410 23:10:59.124720   67722 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 23:10:59.163263   67722 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 23:10:59.216593   67722 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 23:10:59.216621   67722 cache_images.go:84] Images are preloaded, skipping loading
	I0410 23:10:59.216630   67722 kubeadm.go:928] updating node { 192.168.50.77 8443 v1.29.3 crio true true} ...
	I0410 23:10:59.216757   67722 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-688825 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:calico-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0410 23:10:59.216842   67722 ssh_runner.go:195] Run: crio config
	I0410 23:10:59.277631   67722 cni.go:84] Creating CNI manager for "calico"
	I0410 23:10:59.277661   67722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 23:10:59.277686   67722 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.77 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-688825 NodeName:calico-688825 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 23:10:59.277930   67722 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-688825"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 23:10:59.278004   67722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 23:10:59.295342   67722 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 23:10:59.295417   67722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 23:10:59.308123   67722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0410 23:10:59.326967   67722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 23:10:59.345843   67722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0410 23:10:59.366130   67722 ssh_runner.go:195] Run: grep 192.168.50.77	control-plane.minikube.internal$ /etc/hosts
	I0410 23:10:59.370763   67722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 23:10:59.384114   67722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 23:10:59.517525   67722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 23:10:59.536450   67722 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825 for IP: 192.168.50.77
	I0410 23:10:59.536473   67722 certs.go:194] generating shared ca certs ...
	I0410 23:10:59.536487   67722 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:59.536643   67722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 23:10:59.536709   67722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 23:10:59.536721   67722 certs.go:256] generating profile certs ...
	I0410 23:10:59.536794   67722 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/client.key
	I0410 23:10:59.536812   67722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/client.crt with IP's: []
	I0410 23:10:59.616637   67722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/client.crt ...
	I0410 23:10:59.616667   67722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/client.crt: {Name:mk78aea06be29460aed6da139b452abf12ba53c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:59.616842   67722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/client.key ...
	I0410 23:10:59.616860   67722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/client.key: {Name:mkab2effafe18db3cd474f9f2784dd6c88bd3751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:59.616995   67722 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.key.96a94c60
	I0410 23:10:59.617022   67722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.crt.96a94c60 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.77]
	I0410 23:10:59.855230   67722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.crt.96a94c60 ...
	I0410 23:10:59.855259   67722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.crt.96a94c60: {Name:mk779eb68ab6cd9bd28d4f14793092db4dd67a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:59.855426   67722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.key.96a94c60 ...
	I0410 23:10:59.855446   67722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.key.96a94c60: {Name:mkbf52b722a29fddf8861965794b62fbd6ea385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:59.855529   67722 certs.go:381] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.crt.96a94c60 -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.crt
	I0410 23:10:59.855620   67722 certs.go:385] copying /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.key.96a94c60 -> /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.key
	I0410 23:10:59.855671   67722 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/proxy-client.key
	I0410 23:10:59.855690   67722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/proxy-client.crt with IP's: []
	I0410 23:10:59.949947   67722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/proxy-client.crt ...
	I0410 23:10:59.949977   67722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/proxy-client.crt: {Name:mk66bbb858822641e60fe4ff9fec38261ea5af0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:59.950131   67722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/proxy-client.key ...
	I0410 23:10:59.950142   67722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/proxy-client.key: {Name:mk235b9f80ef84108d53bbf5fcc9fe75b4d85a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:10:59.950290   67722 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 23:10:59.950325   67722 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 23:10:59.950335   67722 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 23:10:59.950366   67722 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 23:10:59.950390   67722 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 23:10:59.950417   67722 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 23:10:59.950452   67722 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 23:10:59.951088   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 23:10:59.979125   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 23:11:00.006310   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 23:11:00.035291   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 23:11:00.065004   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0410 23:11:00.093391   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 23:11:00.119960   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 23:11:00.199887   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/calico-688825/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0410 23:11:00.235845   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 23:11:00.266150   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 23:11:00.291579   67722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 23:11:00.326509   67722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 23:11:00.346767   67722 ssh_runner.go:195] Run: openssl version
	I0410 23:11:00.353865   67722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 23:11:00.368120   67722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 23:11:00.373194   67722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 23:11:00.373261   67722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 23:11:00.379667   67722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 23:11:00.392587   67722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 23:11:00.404990   67722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 23:11:00.410100   67722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 23:11:00.410161   67722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 23:11:00.417192   67722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 23:11:00.430851   67722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 23:11:00.447062   67722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 23:11:00.452523   67722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 23:11:00.452599   67722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 23:11:00.459078   67722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 23:11:00.473249   67722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 23:11:00.477759   67722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0410 23:11:00.477816   67722 kubeadm.go:391] StartCluster: {Name:calico-688825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 C
lusterName:calico-688825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 23:11:00.477884   67722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 23:11:00.477928   67722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 23:11:00.519345   67722 cri.go:89] found id: ""
	I0410 23:11:00.519425   67722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0410 23:11:00.530915   67722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 23:11:00.541545   67722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 23:11:00.552678   67722 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 23:11:00.552696   67722 kubeadm.go:156] found existing configuration files:
	
	I0410 23:11:00.552758   67722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 23:11:00.563148   67722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 23:11:00.563204   67722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 23:11:00.575298   67722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 23:11:00.588207   67722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 23:11:00.588256   67722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 23:11:00.601149   67722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 23:11:00.613187   67722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 23:11:00.613253   67722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 23:11:00.625644   67722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 23:11:00.637173   67722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 23:11:00.637226   67722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 23:11:00.650004   67722 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 23:11:00.851986   67722 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 23:11:00.234927   66181 node_ready.go:49] node "kindnet-688825" has status "Ready":"True"
	I0410 23:11:00.234958   66181 node_ready.go:38] duration metric: took 3.913891159s for node "kindnet-688825" to be "Ready" ...
	I0410 23:11:00.234979   66181 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 23:11:00.440348   66181 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-29wl6" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:01.948424   66181 pod_ready.go:92] pod "coredns-76f75df574-29wl6" in "kube-system" namespace has status "Ready":"True"
	I0410 23:11:01.948465   66181 pod_ready.go:81] duration metric: took 1.508071377s for pod "coredns-76f75df574-29wl6" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:01.948483   66181 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-688825" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:01.953712   66181 pod_ready.go:92] pod "etcd-kindnet-688825" in "kube-system" namespace has status "Ready":"True"
	I0410 23:11:01.953735   66181 pod_ready.go:81] duration metric: took 5.242821ms for pod "etcd-kindnet-688825" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:01.953752   66181 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-688825" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:01.966428   66181 pod_ready.go:92] pod "kube-apiserver-kindnet-688825" in "kube-system" namespace has status "Ready":"True"
	I0410 23:11:01.966455   66181 pod_ready.go:81] duration metric: took 12.694095ms for pod "kube-apiserver-kindnet-688825" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:01.966469   66181 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-688825" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:01.971941   66181 pod_ready.go:92] pod "kube-controller-manager-kindnet-688825" in "kube-system" namespace has status "Ready":"True"
	I0410 23:11:01.971964   66181 pod_ready.go:81] duration metric: took 5.48693ms for pod "kube-controller-manager-kindnet-688825" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:01.971975   66181 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-849s6" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:01.978484   66181 pod_ready.go:92] pod "kube-proxy-849s6" in "kube-system" namespace has status "Ready":"True"
	I0410 23:11:01.978510   66181 pod_ready.go:81] duration metric: took 6.52628ms for pod "kube-proxy-849s6" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:01.978523   66181 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-688825" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:02.345773   66181 pod_ready.go:92] pod "kube-scheduler-kindnet-688825" in "kube-system" namespace has status "Ready":"True"
	I0410 23:11:02.345796   66181 pod_ready.go:81] duration metric: took 367.264824ms for pod "kube-scheduler-kindnet-688825" in "kube-system" namespace to be "Ready" ...
	I0410 23:11:02.345807   66181 pod_ready.go:38] duration metric: took 2.110815916s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 23:11:02.345821   66181 api_server.go:52] waiting for apiserver process to appear ...
	I0410 23:11:02.345868   66181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 23:11:02.365454   66181 api_server.go:72] duration metric: took 6.532123819s to wait for apiserver process to appear ...
	I0410 23:11:02.365499   66181 api_server.go:88] waiting for apiserver healthz status ...
	I0410 23:11:02.365520   66181 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0410 23:11:02.370221   66181 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0410 23:11:02.371405   66181 api_server.go:141] control plane version: v1.29.3
	I0410 23:11:02.371429   66181 api_server.go:131] duration metric: took 5.922553ms to wait for apiserver health ...
	I0410 23:11:02.371447   66181 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 23:11:02.549085   66181 system_pods.go:59] 8 kube-system pods found
	I0410 23:11:02.549133   66181 system_pods.go:61] "coredns-76f75df574-29wl6" [d4f15107-031d-4c0e-9302-fba962ff43bb] Running
	I0410 23:11:02.549143   66181 system_pods.go:61] "etcd-kindnet-688825" [3533ac21-b3e7-4244-83d6-5725a3715379] Running
	I0410 23:11:02.549149   66181 system_pods.go:61] "kindnet-vdq77" [8490bdd1-160b-43c7-abfa-fdbff5ead6a1] Running
	I0410 23:11:02.549153   66181 system_pods.go:61] "kube-apiserver-kindnet-688825" [a16f2736-4b34-4d7b-b5fe-2abb3f364137] Running
	I0410 23:11:02.549158   66181 system_pods.go:61] "kube-controller-manager-kindnet-688825" [2f92d1a7-2d62-484f-8cf8-fce56abe8a0e] Running
	I0410 23:11:02.549164   66181 system_pods.go:61] "kube-proxy-849s6" [a5783902-afff-4cce-a94d-65639e82b11d] Running
	I0410 23:11:02.549168   66181 system_pods.go:61] "kube-scheduler-kindnet-688825" [192033f8-ec20-4c00-b561-b3183a94e527] Running
	I0410 23:11:02.549173   66181 system_pods.go:61] "storage-provisioner" [293f3ed5-593d-42c5-9900-b7a96ab6ad81] Running
	I0410 23:11:02.549182   66181 system_pods.go:74] duration metric: took 177.727334ms to wait for pod list to return data ...
	I0410 23:11:02.549193   66181 default_sa.go:34] waiting for default service account to be created ...
	I0410 23:11:02.744710   66181 default_sa.go:45] found service account: "default"
	I0410 23:11:02.744748   66181 default_sa.go:55] duration metric: took 195.543643ms for default service account to be created ...
	I0410 23:11:02.744759   66181 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 23:11:02.951304   66181 system_pods.go:86] 8 kube-system pods found
	I0410 23:11:02.951337   66181 system_pods.go:89] "coredns-76f75df574-29wl6" [d4f15107-031d-4c0e-9302-fba962ff43bb] Running
	I0410 23:11:02.951344   66181 system_pods.go:89] "etcd-kindnet-688825" [3533ac21-b3e7-4244-83d6-5725a3715379] Running
	I0410 23:11:02.951349   66181 system_pods.go:89] "kindnet-vdq77" [8490bdd1-160b-43c7-abfa-fdbff5ead6a1] Running
	I0410 23:11:02.951356   66181 system_pods.go:89] "kube-apiserver-kindnet-688825" [a16f2736-4b34-4d7b-b5fe-2abb3f364137] Running
	I0410 23:11:02.951362   66181 system_pods.go:89] "kube-controller-manager-kindnet-688825" [2f92d1a7-2d62-484f-8cf8-fce56abe8a0e] Running
	I0410 23:11:02.951368   66181 system_pods.go:89] "kube-proxy-849s6" [a5783902-afff-4cce-a94d-65639e82b11d] Running
	I0410 23:11:02.951374   66181 system_pods.go:89] "kube-scheduler-kindnet-688825" [192033f8-ec20-4c00-b561-b3183a94e527] Running
	I0410 23:11:02.951379   66181 system_pods.go:89] "storage-provisioner" [293f3ed5-593d-42c5-9900-b7a96ab6ad81] Running
	I0410 23:11:02.951387   66181 system_pods.go:126] duration metric: took 206.620728ms to wait for k8s-apps to be running ...
	I0410 23:11:02.951396   66181 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 23:11:02.951448   66181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 23:11:02.968154   66181 system_svc.go:56] duration metric: took 16.750559ms WaitForService to wait for kubelet
	I0410 23:11:02.968188   66181 kubeadm.go:576] duration metric: took 7.134863063s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 23:11:02.968212   66181 node_conditions.go:102] verifying NodePressure condition ...
	I0410 23:11:03.146572   66181 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 23:11:03.146605   66181 node_conditions.go:123] node cpu capacity is 2
	I0410 23:11:03.146621   66181 node_conditions.go:105] duration metric: took 178.403941ms to run NodePressure ...
	I0410 23:11:03.146635   66181 start.go:240] waiting for startup goroutines ...
	I0410 23:11:03.146644   66181 start.go:245] waiting for cluster config update ...
	I0410 23:11:03.146657   66181 start.go:254] writing updated cluster config ...
	I0410 23:11:03.146950   66181 ssh_runner.go:195] Run: rm -f paused
	I0410 23:11:03.205220   66181 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 23:11:03.207552   66181 out.go:177] * Done! kubectl is now configured to use "kindnet-688825" cluster and "default" namespace by default
	I0410 23:11:12.914614   67722 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0410 23:11:12.914693   67722 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 23:11:12.914772   67722 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 23:11:12.914924   67722 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 23:11:12.915080   67722 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 23:11:12.915163   67722 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 23:11:12.917200   67722 out.go:204]   - Generating certificates and keys ...
	I0410 23:11:12.917335   67722 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 23:11:12.917445   67722 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 23:11:12.917562   67722 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0410 23:11:12.917652   67722 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0410 23:11:12.917752   67722 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0410 23:11:12.917824   67722 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0410 23:11:12.917895   67722 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0410 23:11:12.918039   67722 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [calico-688825 localhost] and IPs [192.168.50.77 127.0.0.1 ::1]
	I0410 23:11:12.918107   67722 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0410 23:11:12.918254   67722 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [calico-688825 localhost] and IPs [192.168.50.77 127.0.0.1 ::1]
	I0410 23:11:12.918332   67722 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0410 23:11:12.918415   67722 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0410 23:11:12.918494   67722 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0410 23:11:12.918566   67722 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 23:11:12.918638   67722 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 23:11:12.918720   67722 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 23:11:12.918796   67722 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 23:11:12.918886   67722 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 23:11:12.918969   67722 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 23:11:12.919070   67722 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 23:11:12.919167   67722 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 23:11:12.920981   67722 out.go:204]   - Booting up control plane ...
	I0410 23:11:12.921115   67722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 23:11:12.921235   67722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 23:11:12.921330   67722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 23:11:12.921478   67722 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 23:11:12.921609   67722 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 23:11:12.921661   67722 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 23:11:12.921893   67722 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 23:11:12.922025   67722 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.505535 seconds
	I0410 23:11:12.922183   67722 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 23:11:12.922370   67722 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 23:11:12.922483   67722 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 23:11:12.922706   67722 kubeadm.go:309] [mark-control-plane] Marking the node calico-688825 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 23:11:12.922783   67722 kubeadm.go:309] [bootstrap-token] Using token: yhd7c5.i4u1ymbanmig4vjw
	I0410 23:11:12.924690   67722 out.go:204]   - Configuring RBAC rules ...
	I0410 23:11:12.924846   67722 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 23:11:12.924992   67722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 23:11:12.925186   67722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 23:11:12.925368   67722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 23:11:12.925537   67722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 23:11:12.925679   67722 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 23:11:12.925798   67722 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 23:11:12.925857   67722 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 23:11:12.925944   67722 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 23:11:12.925961   67722 kubeadm.go:309] 
	I0410 23:11:12.926060   67722 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 23:11:12.926073   67722 kubeadm.go:309] 
	I0410 23:11:12.926181   67722 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 23:11:12.926196   67722 kubeadm.go:309] 
	I0410 23:11:12.926236   67722 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 23:11:12.926328   67722 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 23:11:12.926400   67722 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 23:11:12.926412   67722 kubeadm.go:309] 
	I0410 23:11:12.926498   67722 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 23:11:12.926508   67722 kubeadm.go:309] 
	I0410 23:11:12.926565   67722 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 23:11:12.926575   67722 kubeadm.go:309] 
	I0410 23:11:12.926653   67722 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 23:11:12.926758   67722 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 23:11:12.926844   67722 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 23:11:12.926849   67722 kubeadm.go:309] 
	I0410 23:11:12.926962   67722 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 23:11:12.927078   67722 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 23:11:12.927094   67722 kubeadm.go:309] 
	I0410 23:11:12.927178   67722 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yhd7c5.i4u1ymbanmig4vjw \
	I0410 23:11:12.927291   67722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 23:11:12.927320   67722 kubeadm.go:309] 	--control-plane 
	I0410 23:11:12.927326   67722 kubeadm.go:309] 
	I0410 23:11:12.927458   67722 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 23:11:12.927473   67722 kubeadm.go:309] 
	I0410 23:11:12.927603   67722 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yhd7c5.i4u1ymbanmig4vjw \
	I0410 23:11:12.927776   67722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 23:11:12.927796   67722 cni.go:84] Creating CNI manager for "calico"
	I0410 23:11:12.929449   67722 out.go:177] * Configuring Calico (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.883837477Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790674883812601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20ea4ff0-2401-4ab7-aa9d-ef08e5a9feb8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.884774985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e092b908-c2d2-46e4-800c-0b303782753e name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.884831668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e092b908-c2d2-46e4-800c-0b303782753e name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.885021580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20860098a53a6c38bbb6118735789916a226b29170ef73a5f59b788e3e789d62,PodSandboxId:49425f3f0f3f6b7a6e493aff156f5590f340a93172980c09d58f0508792c2d4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789647852063589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad8e533-69ca-4eb5-9595-e6808dc0ff1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9a77a63,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb30f5e43a4c16269f8f5d80af70f51e68db7156d39cda88be08c09fc0b9603,PodSandboxId:b4dfdda9ca2105236b568781ee16a193ab337538af7a4e04193548a16506b913,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647278456064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bvdp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8a326-77ef-469f-abf7-082ff8a44782,},Annotations:map[string]string{io.kubernetes.container.hash: 208fdcc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9f7f18b77ab56e9facff46ac6daf77efb0725a434223643e10a22781c14a97,PodSandboxId:512ee3eeb792f8dbaddf11a5fcd68cb8fdab38d3bab0523f27f9851604d9d3e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647136256389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v2pp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21
38fb5e-9c16-4a25-85d3-3d84b361a1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 39288f10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e85003fcda80065ba08ce664b39389c139e522b6fa6d3d549aa1489480769ba,PodSandboxId:2e331650860759f26b5bfc40e8dd29b524d4d7e6a670b8968c91b07752fc587b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:
1712789646301595587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xj5nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bb1878-3e4b-4647-a3a7-cb327ccbd364,},Annotations:map[string]string{io.kubernetes.container.hash: e6089a25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c4592bdae762071d9e3194f77a18c18a4e9892287473579e8949b855399bb7,PodSandboxId:3c76004c49eee60d1bc73391f13acef54ae33d676fb055852d74b0e044507385,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789626759076112,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ade9b0541b33ae26f2058c883c3798,},Annotations:map[string]string{io.kubernetes.container.hash: 5f81a59e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08ba1e285082b3e8168a800dbcfdffb0730b5e9ae2f5ca7dd4a1e41cbe5d061,PodSandboxId:f51fc7c43757ebf9dc411563a65a86b34eba8ebc9c77cfe96624c6f261c56db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789626718096771,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc6ba0b7c555727afeeda8fec9bc199,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f384cebb9db6a30cc358c386a5336d6d9de64f99fc0ab767580c8cda15b52f2,PodSandboxId:ae9380b9c2fba691c02a84120e8c0b8c16e9329a3f93d2dfdd23a285f9dd72bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789626709682024,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6695da719563f5e9d31d5ac8cc82cbd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd5e113a3c19da2d6de252551db5e40ec3162ff53e7078636fb2903d568adbf,PodSandboxId:31dc2b0b704c001485223edc854b0f80661499793a947799fc2c13cd5cdee36b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789626639730707,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4f1cb7f324a695caba4a74fdffb456b9b22f56f2a3883880ec4686227e507,PodSandboxId:a12df2a5ab1a88cfc09ae4dc1bf2a27a1ef57e0dae98c6e07ecfd0292765950f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712789335198838226,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e092b908-c2d2-46e4-800c-0b303782753e name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.923777122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ea79668-f7d7-4dbf-9c7f-cf7bb9d98280 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.923872933Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ea79668-f7d7-4dbf-9c7f-cf7bb9d98280 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.925212212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6995b0e9-a0ef-4ab9-9bad-36f7aa36375b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.925758297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790674925732114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6995b0e9-a0ef-4ab9-9bad-36f7aa36375b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.926505528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74c951d8-02c0-4747-a203-bd6f2c2d87fa name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.926556069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74c951d8-02c0-4747-a203-bd6f2c2d87fa name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.926750481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20860098a53a6c38bbb6118735789916a226b29170ef73a5f59b788e3e789d62,PodSandboxId:49425f3f0f3f6b7a6e493aff156f5590f340a93172980c09d58f0508792c2d4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789647852063589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad8e533-69ca-4eb5-9595-e6808dc0ff1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9a77a63,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb30f5e43a4c16269f8f5d80af70f51e68db7156d39cda88be08c09fc0b9603,PodSandboxId:b4dfdda9ca2105236b568781ee16a193ab337538af7a4e04193548a16506b913,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647278456064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bvdp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8a326-77ef-469f-abf7-082ff8a44782,},Annotations:map[string]string{io.kubernetes.container.hash: 208fdcc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9f7f18b77ab56e9facff46ac6daf77efb0725a434223643e10a22781c14a97,PodSandboxId:512ee3eeb792f8dbaddf11a5fcd68cb8fdab38d3bab0523f27f9851604d9d3e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647136256389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v2pp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21
38fb5e-9c16-4a25-85d3-3d84b361a1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 39288f10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e85003fcda80065ba08ce664b39389c139e522b6fa6d3d549aa1489480769ba,PodSandboxId:2e331650860759f26b5bfc40e8dd29b524d4d7e6a670b8968c91b07752fc587b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:
1712789646301595587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xj5nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bb1878-3e4b-4647-a3a7-cb327ccbd364,},Annotations:map[string]string{io.kubernetes.container.hash: e6089a25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c4592bdae762071d9e3194f77a18c18a4e9892287473579e8949b855399bb7,PodSandboxId:3c76004c49eee60d1bc73391f13acef54ae33d676fb055852d74b0e044507385,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789626759076112,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ade9b0541b33ae26f2058c883c3798,},Annotations:map[string]string{io.kubernetes.container.hash: 5f81a59e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08ba1e285082b3e8168a800dbcfdffb0730b5e9ae2f5ca7dd4a1e41cbe5d061,PodSandboxId:f51fc7c43757ebf9dc411563a65a86b34eba8ebc9c77cfe96624c6f261c56db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789626718096771,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc6ba0b7c555727afeeda8fec9bc199,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f384cebb9db6a30cc358c386a5336d6d9de64f99fc0ab767580c8cda15b52f2,PodSandboxId:ae9380b9c2fba691c02a84120e8c0b8c16e9329a3f93d2dfdd23a285f9dd72bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789626709682024,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6695da719563f5e9d31d5ac8cc82cbd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd5e113a3c19da2d6de252551db5e40ec3162ff53e7078636fb2903d568adbf,PodSandboxId:31dc2b0b704c001485223edc854b0f80661499793a947799fc2c13cd5cdee36b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789626639730707,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4f1cb7f324a695caba4a74fdffb456b9b22f56f2a3883880ec4686227e507,PodSandboxId:a12df2a5ab1a88cfc09ae4dc1bf2a27a1ef57e0dae98c6e07ecfd0292765950f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712789335198838226,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74c951d8-02c0-4747-a203-bd6f2c2d87fa name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.968904683Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ebf9269-fa5f-4476-8f6a-a05eb24fa703 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.969009076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ebf9269-fa5f-4476-8f6a-a05eb24fa703 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.970271970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e0e27fc-975e-4d94-adeb-29187260c020 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.970770138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790674970747328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e0e27fc-975e-4d94-adeb-29187260c020 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.971442542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27b62d3e-cc73-4fc8-973d-346593663fee name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.971531577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27b62d3e-cc73-4fc8-973d-346593663fee name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:14 embed-certs-706500 crio[732]: time="2024-04-10 23:11:14.971741305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20860098a53a6c38bbb6118735789916a226b29170ef73a5f59b788e3e789d62,PodSandboxId:49425f3f0f3f6b7a6e493aff156f5590f340a93172980c09d58f0508792c2d4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789647852063589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad8e533-69ca-4eb5-9595-e6808dc0ff1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9a77a63,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb30f5e43a4c16269f8f5d80af70f51e68db7156d39cda88be08c09fc0b9603,PodSandboxId:b4dfdda9ca2105236b568781ee16a193ab337538af7a4e04193548a16506b913,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647278456064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bvdp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8a326-77ef-469f-abf7-082ff8a44782,},Annotations:map[string]string{io.kubernetes.container.hash: 208fdcc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9f7f18b77ab56e9facff46ac6daf77efb0725a434223643e10a22781c14a97,PodSandboxId:512ee3eeb792f8dbaddf11a5fcd68cb8fdab38d3bab0523f27f9851604d9d3e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647136256389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v2pp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21
38fb5e-9c16-4a25-85d3-3d84b361a1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 39288f10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e85003fcda80065ba08ce664b39389c139e522b6fa6d3d549aa1489480769ba,PodSandboxId:2e331650860759f26b5bfc40e8dd29b524d4d7e6a670b8968c91b07752fc587b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:
1712789646301595587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xj5nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bb1878-3e4b-4647-a3a7-cb327ccbd364,},Annotations:map[string]string{io.kubernetes.container.hash: e6089a25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c4592bdae762071d9e3194f77a18c18a4e9892287473579e8949b855399bb7,PodSandboxId:3c76004c49eee60d1bc73391f13acef54ae33d676fb055852d74b0e044507385,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789626759076112,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ade9b0541b33ae26f2058c883c3798,},Annotations:map[string]string{io.kubernetes.container.hash: 5f81a59e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08ba1e285082b3e8168a800dbcfdffb0730b5e9ae2f5ca7dd4a1e41cbe5d061,PodSandboxId:f51fc7c43757ebf9dc411563a65a86b34eba8ebc9c77cfe96624c6f261c56db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789626718096771,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc6ba0b7c555727afeeda8fec9bc199,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f384cebb9db6a30cc358c386a5336d6d9de64f99fc0ab767580c8cda15b52f2,PodSandboxId:ae9380b9c2fba691c02a84120e8c0b8c16e9329a3f93d2dfdd23a285f9dd72bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789626709682024,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6695da719563f5e9d31d5ac8cc82cbd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd5e113a3c19da2d6de252551db5e40ec3162ff53e7078636fb2903d568adbf,PodSandboxId:31dc2b0b704c001485223edc854b0f80661499793a947799fc2c13cd5cdee36b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789626639730707,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4f1cb7f324a695caba4a74fdffb456b9b22f56f2a3883880ec4686227e507,PodSandboxId:a12df2a5ab1a88cfc09ae4dc1bf2a27a1ef57e0dae98c6e07ecfd0292765950f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712789335198838226,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27b62d3e-cc73-4fc8-973d-346593663fee name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:15 embed-certs-706500 crio[732]: time="2024-04-10 23:11:15.008223167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e9f8e94-4048-43cf-9645-77d19cbd94d6 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:15 embed-certs-706500 crio[732]: time="2024-04-10 23:11:15.008319714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e9f8e94-4048-43cf-9645-77d19cbd94d6 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:11:15 embed-certs-706500 crio[732]: time="2024-04-10 23:11:15.009980510Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6600681-4f60-470c-b9d2-be278a0094c2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:15 embed-certs-706500 crio[732]: time="2024-04-10 23:11:15.010713244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790675010680731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6600681-4f60-470c-b9d2-be278a0094c2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:11:15 embed-certs-706500 crio[732]: time="2024-04-10 23:11:15.011540198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04b15f41-c7a9-4e9b-a70e-ec47465de889 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:15 embed-certs-706500 crio[732]: time="2024-04-10 23:11:15.011616529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04b15f41-c7a9-4e9b-a70e-ec47465de889 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:11:15 embed-certs-706500 crio[732]: time="2024-04-10 23:11:15.012022519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20860098a53a6c38bbb6118735789916a226b29170ef73a5f59b788e3e789d62,PodSandboxId:49425f3f0f3f6b7a6e493aff156f5590f340a93172980c09d58f0508792c2d4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712789647852063589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad8e533-69ca-4eb5-9595-e6808dc0ff1a,},Annotations:map[string]string{io.kubernetes.container.hash: 9a77a63,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb30f5e43a4c16269f8f5d80af70f51e68db7156d39cda88be08c09fc0b9603,PodSandboxId:b4dfdda9ca2105236b568781ee16a193ab337538af7a4e04193548a16506b913,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647278456064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bvdp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc8a326-77ef-469f-abf7-082ff8a44782,},Annotations:map[string]string{io.kubernetes.container.hash: 208fdcc3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9f7f18b77ab56e9facff46ac6daf77efb0725a434223643e10a22781c14a97,PodSandboxId:512ee3eeb792f8dbaddf11a5fcd68cb8fdab38d3bab0523f27f9851604d9d3e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789647136256389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v2pp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21
38fb5e-9c16-4a25-85d3-3d84b361a1e8,},Annotations:map[string]string{io.kubernetes.container.hash: 39288f10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e85003fcda80065ba08ce664b39389c139e522b6fa6d3d549aa1489480769ba,PodSandboxId:2e331650860759f26b5bfc40e8dd29b524d4d7e6a670b8968c91b07752fc587b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:
1712789646301595587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xj5nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bb1878-3e4b-4647-a3a7-cb327ccbd364,},Annotations:map[string]string{io.kubernetes.container.hash: e6089a25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c4592bdae762071d9e3194f77a18c18a4e9892287473579e8949b855399bb7,PodSandboxId:3c76004c49eee60d1bc73391f13acef54ae33d676fb055852d74b0e044507385,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789626759076112,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ade9b0541b33ae26f2058c883c3798,},Annotations:map[string]string{io.kubernetes.container.hash: 5f81a59e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08ba1e285082b3e8168a800dbcfdffb0730b5e9ae2f5ca7dd4a1e41cbe5d061,PodSandboxId:f51fc7c43757ebf9dc411563a65a86b34eba8ebc9c77cfe96624c6f261c56db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712789626718096771,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc6ba0b7c555727afeeda8fec9bc199,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f384cebb9db6a30cc358c386a5336d6d9de64f99fc0ab767580c8cda15b52f2,PodSandboxId:ae9380b9c2fba691c02a84120e8c0b8c16e9329a3f93d2dfdd23a285f9dd72bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712789626709682024,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6695da719563f5e9d31d5ac8cc82cbd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd5e113a3c19da2d6de252551db5e40ec3162ff53e7078636fb2903d568adbf,PodSandboxId:31dc2b0b704c001485223edc854b0f80661499793a947799fc2c13cd5cdee36b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712789626639730707,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb4f1cb7f324a695caba4a74fdffb456b9b22f56f2a3883880ec4686227e507,PodSandboxId:a12df2a5ab1a88cfc09ae4dc1bf2a27a1ef57e0dae98c6e07ecfd0292765950f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712789335198838226,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-706500,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f7dfba43914b0d24c75b7149fd03d7,},Annotations:map[string]string{io.kubernetes.container.hash: b8446a68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04b15f41-c7a9-4e9b-a70e-ec47465de889 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	20860098a53a6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   49425f3f0f3f6       storage-provisioner
	acb30f5e43a4c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   b4dfdda9ca210       coredns-76f75df574-bvdp5
	5b9f7f18b77ab       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   512ee3eeb792f       coredns-76f75df574-v2pp5
	8e85003fcda80       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   17 minutes ago      Running             kube-proxy                0                   2e33165086075       kube-proxy-xj5nq
	24c4592bdae76       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   17 minutes ago      Running             etcd                      2                   3c76004c49eee       etcd-embed-certs-706500
	a08ba1e285082       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   17 minutes ago      Running             kube-scheduler            2                   f51fc7c43757e       kube-scheduler-embed-certs-706500
	5f384cebb9db6       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   17 minutes ago      Running             kube-controller-manager   2                   ae9380b9c2fba       kube-controller-manager-embed-certs-706500
	4dd5e113a3c19       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   17 minutes ago      Running             kube-apiserver            2                   31dc2b0b704c0       kube-apiserver-embed-certs-706500
	bdb4f1cb7f324       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   22 minutes ago      Exited              kube-apiserver            1                   a12df2a5ab1a8       kube-apiserver-embed-certs-706500
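	
	A container listing like the one above can usually be regenerated directly on the node with crictl; a minimal sketch, assuming the minikube profile/VM name matches the node name shown in these logs and that CRI-O is listening on its default socket:
	
	  # open a shell on the minikube VM for this profile
	  minikube ssh -p embed-certs-706500
	  # inside the VM: list all containers, including exited ones, straight from CRI-O
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a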
	
	
	==> coredns [5b9f7f18b77ab56e9facff46ac6daf77efb0725a434223643e10a22781c14a97] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [acb30f5e43a4c16269f8f5d80af70f51e68db7156d39cda88be08c09fc0b9603] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-706500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-706500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=embed-certs-706500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T22_53_53_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:53:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-706500
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 23:11:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 23:09:33 +0000   Wed, 10 Apr 2024 22:53:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 23:09:33 +0000   Wed, 10 Apr 2024 22:53:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 23:09:33 +0000   Wed, 10 Apr 2024 22:53:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 23:09:33 +0000   Wed, 10 Apr 2024 22:54:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    embed-certs-706500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 65039216ea2f4d04bceb173695a31972
	  System UUID:                65039216-ea2f-4d04-bceb-173695a31972
	  Boot ID:                    50e06d99-b932-43cf-af18-fddcec0b4877
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-bvdp5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-76f75df574-v2pp5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-embed-certs-706500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-embed-certs-706500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-embed-certs-706500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-xj5nq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-embed-certs-706500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-57f55c9bc5-9mrmz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node embed-certs-706500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node embed-certs-706500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node embed-certs-706500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node embed-certs-706500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node embed-certs-706500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node embed-certs-706500 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             17m                kubelet          Node embed-certs-706500 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeReady                17m                kubelet          Node embed-certs-706500 status is now: NodeReady
	  Normal  RegisteredNode           17m                node-controller  Node embed-certs-706500 event: Registered Node embed-certs-706500 in Controller
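	
	The node description above is standard kubectl output captured by the log collector; a minimal sketch of pulling the same view against this cluster, assuming the test's bundled minikube binary and the profile name taken from these logs:
	
	  out/minikube-linux-amd64 -p embed-certs-706500 kubectl -- describe node embed-certs-706500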
	
	
	==> dmesg <==
	[  +0.054628] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043256] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.758871] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.697367] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.656672] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.610339] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.059412] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062250] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.191066] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.162016] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.349085] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.721413] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.066100] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.853437] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +5.636785] kauditd_printk_skb: 97 callbacks suppressed
	[Apr10 22:49] kauditd_printk_skb: 81 callbacks suppressed
	[Apr10 22:53] systemd-fstab-generator[3594]: Ignoring "noauto" option for root device
	[  +0.064419] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.253364] systemd-fstab-generator[3915]: Ignoring "noauto" option for root device
	[  +0.092526] kauditd_printk_skb: 54 callbacks suppressed
	[Apr10 22:54] systemd-fstab-generator[4127]: Ignoring "noauto" option for root device
	[  +0.104790] kauditd_printk_skb: 12 callbacks suppressed
	[Apr10 22:55] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [24c4592bdae762071d9e3194f77a18c18a4e9892287473579e8949b855399bb7] <==
	{"level":"info","ts":"2024-04-10T23:03:47.910567Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2286185153,"revision":716,"compact-revision":-1}
	{"level":"warn","ts":"2024-04-10T23:08:31.601952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.511828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.10\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-04-10T23:08:31.603187Z","caller":"traceutil/trace.go:171","msg":"trace[1399840512] range","detail":"{range_begin:/registry/masterleases/192.168.39.10; range_end:; response_count:1; response_revision:1189; }","duration":"229.912193ms","start":"2024-04-10T23:08:31.373231Z","end":"2024-04-10T23:08:31.603143Z","steps":["trace[1399840512] 'range keys from in-memory index tree'  (duration: 228.415394ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T23:08:31.912198Z","caller":"traceutil/trace.go:171","msg":"trace[249859396] transaction","detail":"{read_only:false; response_revision:1190; number_of_response:1; }","duration":"178.904401ms","start":"2024-04-10T23:08:31.733277Z","end":"2024-04-10T23:08:31.912181Z","steps":["trace[249859396] 'process raft request'  (duration: 127.301514ms)","trace[249859396] 'compare'  (duration: 51.517841ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-10T23:08:47.908697Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2024-04-10T23:08:47.913627Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":959,"took":"4.642515ms","hash":1211859425,"current-db-size-bytes":2351104,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1622016,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-10T23:08:47.913688Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1211859425,"revision":959,"compact-revision":716}
	{"level":"warn","ts":"2024-04-10T23:08:56.826182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.988775ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4399610885155459650 > lease_revoke:<id:3d0e8eca37a211f2>","response":"size:28"}
	{"level":"info","ts":"2024-04-10T23:08:57.022844Z","caller":"traceutil/trace.go:171","msg":"trace[146224907] transaction","detail":"{read_only:false; response_revision:1210; number_of_response:1; }","duration":"172.624176ms","start":"2024-04-10T23:08:56.850194Z","end":"2024-04-10T23:08:57.022818Z","steps":["trace[146224907] 'process raft request'  (duration: 172.484658ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T23:08:57.331084Z","caller":"traceutil/trace.go:171","msg":"trace[346694739] linearizableReadLoop","detail":"{readStateIndex:1406; appliedIndex:1405; }","duration":"126.052369ms","start":"2024-04-10T23:08:57.205004Z","end":"2024-04-10T23:08:57.331057Z","steps":["trace[346694739] 'read index received'  (duration: 125.896399ms)","trace[346694739] 'applied index is now lower than readState.Index'  (duration: 155.157µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-10T23:08:57.331231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.245039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-10T23:08:57.331278Z","caller":"traceutil/trace.go:171","msg":"trace[375698641] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1211; }","duration":"126.330266ms","start":"2024-04-10T23:08:57.204925Z","end":"2024-04-10T23:08:57.331255Z","steps":["trace[375698641] 'agreement among raft nodes before linearized reading'  (duration: 126.248455ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T23:08:57.331511Z","caller":"traceutil/trace.go:171","msg":"trace[1271056666] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"192.855285ms","start":"2024-04-10T23:08:57.138642Z","end":"2024-04-10T23:08:57.331497Z","steps":["trace[1271056666] 'process raft request'  (duration: 192.183477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T23:09:41.773063Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.46867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4399610885155459870 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.10\" mod_revision:1238 > success:<request_put:<key:\"/registry/masterleases/192.168.39.10\" value_size:66 lease:4399610885155459868 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.10\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-10T23:09:41.773541Z","caller":"traceutil/trace.go:171","msg":"trace[1069564276] linearizableReadLoop","detail":"{readStateIndex:1452; appliedIndex:1451; }","duration":"179.516139ms","start":"2024-04-10T23:09:41.593986Z","end":"2024-04-10T23:09:41.773503Z","steps":["trace[1069564276] 'read index received'  (duration: 47.496653ms)","trace[1069564276] 'applied index is now lower than readState.Index'  (duration: 132.01808ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-10T23:09:41.77358Z","caller":"traceutil/trace.go:171","msg":"trace[1419086088] transaction","detail":"{read_only:false; response_revision:1247; number_of_response:1; }","duration":"265.298308ms","start":"2024-04-10T23:09:41.508264Z","end":"2024-04-10T23:09:41.773562Z","steps":["trace[1419086088] 'process raft request'  (duration: 133.318269ms)","trace[1419086088] 'compare'  (duration: 130.370469ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-10T23:09:41.773757Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.752792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2024-04-10T23:09:41.773841Z","caller":"traceutil/trace.go:171","msg":"trace[73619146] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1247; }","duration":"179.866812ms","start":"2024-04-10T23:09:41.593963Z","end":"2024-04-10T23:09:41.77383Z","steps":["trace[73619146] 'agreement among raft nodes before linearized reading'  (duration: 179.642427ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-10T23:09:42.02223Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.706228ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4399610885155459875 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1246 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-10T23:09:42.022507Z","caller":"traceutil/trace.go:171","msg":"trace[1914555839] transaction","detail":"{read_only:false; response_revision:1248; number_of_response:1; }","duration":"243.514345ms","start":"2024-04-10T23:09:41.778978Z","end":"2024-04-10T23:09:42.022492Z","steps":["trace[1914555839] 'process raft request'  (duration: 123.462622ms)","trace[1914555839] 'compare'  (duration: 119.542173ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-10T23:10:31.50486Z","caller":"traceutil/trace.go:171","msg":"trace[409756343] linearizableReadLoop","detail":"{readStateIndex:1502; appliedIndex:1501; }","duration":"124.855386ms","start":"2024-04-10T23:10:31.379969Z","end":"2024-04-10T23:10:31.504824Z","steps":["trace[409756343] 'read index received'  (duration: 124.635416ms)","trace[409756343] 'applied index is now lower than readState.Index'  (duration: 219.447µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-10T23:10:31.505325Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.318103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.10\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-04-10T23:10:31.505528Z","caller":"traceutil/trace.go:171","msg":"trace[1946834062] transaction","detail":"{read_only:false; response_revision:1288; number_of_response:1; }","duration":"271.184634ms","start":"2024-04-10T23:10:31.234325Z","end":"2024-04-10T23:10:31.505509Z","steps":["trace[1946834062] 'process raft request'  (duration: 270.324185ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T23:10:31.505579Z","caller":"traceutil/trace.go:171","msg":"trace[131649757] range","detail":"{range_begin:/registry/masterleases/192.168.39.10; range_end:; response_count:1; response_revision:1288; }","duration":"125.631517ms","start":"2024-04-10T23:10:31.379938Z","end":"2024-04-10T23:10:31.505569Z","steps":["trace[131649757] 'agreement among raft nodes before linearized reading'  (duration: 125.203071ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-10T23:10:59.842573Z","caller":"traceutil/trace.go:171","msg":"trace[2143873631] transaction","detail":"{read_only:false; response_revision:1311; number_of_response:1; }","duration":"248.928287ms","start":"2024-04-10T23:10:59.593614Z","end":"2024-04-10T23:10:59.842543Z","steps":["trace[2143873631] 'process raft request'  (duration: 248.657734ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:11:15 up 22 min,  0 users,  load average: 0.26, 0.33, 0.26
	Linux embed-certs-706500 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4dd5e113a3c19da2d6de252551db5e40ec3162ff53e7078636fb2903d568adbf] <==
	W0410 23:06:50.616981       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:06:50.617027       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:06:50.617042       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0410 23:08:31.913058       1 trace.go:236] Trace[106090703]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.10,type:*v1.Endpoints,resource:apiServerIPInfo (10-Apr-2024 23:08:31.372) (total time: 540ms):
	Trace[106090703]: ---"initial value restored" 232ms (23:08:31.605)
	Trace[106090703]: ---"Transaction prepared" 127ms (23:08:31.732)
	Trace[106090703]: ---"Txn call completed" 180ms (23:08:31.912)
	Trace[106090703]: [540.28387ms] [540.28387ms] END
	W0410 23:08:49.618683       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:08:49.618801       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0410 23:08:50.619994       1 handler_proxy.go:93] no RequestInfo found in the context
	W0410 23:08:50.620125       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:08:50.620318       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:08:50.620384       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0410 23:08:50.620439       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:08:50.621741       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:09:50.620696       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:09:50.620786       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:09:50.620796       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:09:50.623030       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:09:50.623151       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:09:50.623195       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [bdb4f1cb7f324a695caba4a74fdffb456b9b22f56f2a3883880ec4686227e507] <==
	W0410 22:53:42.020816       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.123449       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.221930       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.340786       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.376251       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.397993       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.406160       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.436759       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.457748       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.479143       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.563277       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.630921       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.752971       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.817542       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.865904       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:42.894937       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.042689       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.079658       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.089799       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.148970       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.253604       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.279669       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.298940       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.322175       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:53:43.360739       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5f384cebb9db6a30cc358c386a5336d6d9de64f99fc0ab767580c8cda15b52f2] <==
	I0410 23:05:35.443310       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:06:04.913101       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:06:05.456169       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:06:34.919619       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:06:35.466149       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:07:04.926083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:07:05.478867       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:07:34.931991       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:07:35.489010       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:08:04.939429       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:08:05.500099       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:08:34.946605       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:08:35.516292       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:09:04.952577       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:09:05.530855       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:09:34.957497       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:09:35.546711       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:10:04.964769       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:10:05.557067       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0410 23:10:17.247726       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="402.351µs"
	I0410 23:10:31.509864       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="131.686µs"
	E0410 23:10:34.970468       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:10:35.565785       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:11:04.978427       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:11:05.574966       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8e85003fcda80065ba08ce664b39389c139e522b6fa6d3d549aa1489480769ba] <==
	I0410 22:54:06.777170       1 server_others.go:72] "Using iptables proxy"
	I0410 22:54:06.793660       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	I0410 22:54:06.865413       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0410 22:54:06.865440       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:54:06.865456       1 server_others.go:168] "Using iptables Proxier"
	I0410 22:54:06.868859       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:54:06.869088       1 server.go:865] "Version info" version="v1.29.3"
	I0410 22:54:06.869100       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:54:06.870247       1 config.go:188] "Starting service config controller"
	I0410 22:54:06.870268       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0410 22:54:06.870295       1 config.go:97] "Starting endpoint slice config controller"
	I0410 22:54:06.870299       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0410 22:54:06.870925       1 config.go:315] "Starting node config controller"
	I0410 22:54:06.870934       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0410 22:54:06.974552       1 shared_informer.go:318] Caches are synced for node config
	I0410 22:54:06.974580       1 shared_informer.go:318] Caches are synced for service config
	I0410 22:54:06.974606       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a08ba1e285082b3e8168a800dbcfdffb0730b5e9ae2f5ca7dd4a1e41cbe5d061] <==
	W0410 22:53:49.621113       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0410 22:53:49.621142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0410 22:53:50.435056       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0410 22:53:50.435085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0410 22:53:50.517034       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0410 22:53:50.519122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0410 22:53:50.519320       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0410 22:53:50.519443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0410 22:53:50.529574       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0410 22:53:50.529740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0410 22:53:50.568183       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0410 22:53:50.568455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0410 22:53:50.671143       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0410 22:53:50.671240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0410 22:53:50.675036       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0410 22:53:50.675140       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0410 22:53:50.751237       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0410 22:53:50.751968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0410 22:53:50.754817       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0410 22:53:50.754900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0410 22:53:50.814059       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0410 22:53:50.814115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0410 22:53:50.864163       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0410 22:53:50.864217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0410 22:53:53.807070       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 10 23:08:53 embed-certs-706500 kubelet[3922]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:08:56 embed-certs-706500 kubelet[3922]: E0410 23:08:56.218206    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:09:07 embed-certs-706500 kubelet[3922]: E0410 23:09:07.217612    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:09:19 embed-certs-706500 kubelet[3922]: E0410 23:09:19.217321    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:09:33 embed-certs-706500 kubelet[3922]: E0410 23:09:33.220614    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:09:47 embed-certs-706500 kubelet[3922]: E0410 23:09:47.220214    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:09:53 embed-certs-706500 kubelet[3922]: E0410 23:09:53.324383    3922 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 23:09:53 embed-certs-706500 kubelet[3922]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:09:53 embed-certs-706500 kubelet[3922]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:09:53 embed-certs-706500 kubelet[3922]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:09:53 embed-certs-706500 kubelet[3922]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:10:02 embed-certs-706500 kubelet[3922]: E0410 23:10:02.236940    3922 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 10 23:10:02 embed-certs-706500 kubelet[3922]: E0410 23:10:02.237190    3922 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 10 23:10:02 embed-certs-706500 kubelet[3922]: E0410 23:10:02.237797    3922 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d5zht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-9mrmz_kube-system(a4ccd29a-d27e-4291-ac8c-3135d65f8a2a): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 10 23:10:02 embed-certs-706500 kubelet[3922]: E0410 23:10:02.237946    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:10:17 embed-certs-706500 kubelet[3922]: E0410 23:10:17.218488    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:10:31 embed-certs-706500 kubelet[3922]: E0410 23:10:31.222041    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:10:46 embed-certs-706500 kubelet[3922]: E0410 23:10:46.218458    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:10:53 embed-certs-706500 kubelet[3922]: E0410 23:10:53.323018    3922 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 10 23:10:53 embed-certs-706500 kubelet[3922]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:10:53 embed-certs-706500 kubelet[3922]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:10:53 embed-certs-706500 kubelet[3922]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:10:53 embed-certs-706500 kubelet[3922]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:10:58 embed-certs-706500 kubelet[3922]: E0410 23:10:58.217873    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	Apr 10 23:11:11 embed-certs-706500 kubelet[3922]: E0410 23:11:11.219852    3922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9mrmz" podUID="a4ccd29a-d27e-4291-ac8c-3135d65f8a2a"
	
	
	==> storage-provisioner [20860098a53a6c38bbb6118735789916a226b29170ef73a5f59b788e3e789d62] <==
	I0410 22:54:07.984260       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0410 22:54:08.004272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0410 22:54:08.004432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0410 22:54:08.019626       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0410 22:54:08.019781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-706500_ec7f311a-4e38-43a8-9919-a60191d3f5b0!
	I0410 22:54:08.022129       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74ecf70d-9945-4265-84d4-8d8cdc02049d", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-706500_ec7f311a-4e38-43a8-9919-a60191d3f5b0 became leader
	I0410 22:54:08.120458       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-706500_ec7f311a-4e38-43a8-9919-a60191d3f5b0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-706500 -n embed-certs-706500
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-706500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9mrmz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-706500 describe pod metrics-server-57f55c9bc5-9mrmz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-706500 describe pod metrics-server-57f55c9bc5-9mrmz: exit status 1 (64.844018ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9mrmz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-706500 describe pod metrics-server-57f55c9bc5-9mrmz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (481.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (244.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-646133 -n no-preload-646133
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-10 23:08:14.512027961 +0000 UTC m=+6016.748457241
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-646133 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-646133 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.119µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-646133 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
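The start_stop_delete_test.go:297 message above is the actual assertion that failed: the dashboard-metrics-scraper deployment was expected to reference registry.k8s.io/echoserver:1.4, but the describe call hit the context deadline before returning any deployment info. As a rough manual check (a sketch only, assuming the no-preload-646133 context is still reachable and the deployment was actually created), the image the deployment ended up with can be read straight from its pod template:

	kubectl --context no-preload-646133 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

If the addon had loaded correctly, that output would contain registry.k8s.io/echoserver:1.4.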
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646133 -n no-preload-646133
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-646133 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-646133 logs -n 25: (1.365286389s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-646133             | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:41 UTC |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:42 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-706500            | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC | 10 Apr 24 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862528        | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-646133                  | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-464519                              | cert-expiration-464519       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-676292 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	|         | disable-driver-mounts-676292                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862528             | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-519831  | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-706500                 | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:54 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-519831       | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC | 10 Apr 24 22:53 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:07 UTC | 10 Apr 24 23:07 UTC |
	| start   | -p newest-cni-497448 --memory=2200 --alsologtostderr   | newest-cni-497448            | jenkins | v1.33.0-beta.0 | 10 Apr 24 23:07 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 23:07:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 23:07:59.471617   64386 out.go:291] Setting OutFile to fd 1 ...
	I0410 23:07:59.471736   64386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 23:07:59.471751   64386 out.go:304] Setting ErrFile to fd 2...
	I0410 23:07:59.471756   64386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 23:07:59.471947   64386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 23:07:59.472566   64386 out.go:298] Setting JSON to false
	I0410 23:07:59.473541   64386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6622,"bootTime":1712783858,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 23:07:59.473608   64386 start.go:139] virtualization: kvm guest
	I0410 23:07:59.475924   64386 out.go:177] * [newest-cni-497448] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 23:07:59.477538   64386 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 23:07:59.477525   64386 notify.go:220] Checking for updates...
	I0410 23:07:59.478972   64386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 23:07:59.480292   64386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 23:07:59.481517   64386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 23:07:59.482796   64386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 23:07:59.484147   64386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 23:07:59.485933   64386 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:07:59.486042   64386 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 23:07:59.486173   64386 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 23:07:59.486282   64386 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 23:07:59.522755   64386 out.go:177] * Using the kvm2 driver based on user configuration
	I0410 23:07:59.524038   64386 start.go:297] selected driver: kvm2
	I0410 23:07:59.524052   64386 start.go:901] validating driver "kvm2" against <nil>
	I0410 23:07:59.524063   64386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 23:07:59.524816   64386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 23:07:59.524877   64386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 23:07:59.539965   64386 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 23:07:59.540020   64386 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0410 23:07:59.540047   64386 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0410 23:07:59.540327   64386 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0410 23:07:59.540447   64386 cni.go:84] Creating CNI manager for ""
	I0410 23:07:59.540464   64386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 23:07:59.540478   64386 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0410 23:07:59.540540   64386 start.go:340] cluster config:
	{Name:newest-cni-497448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:newest-cni-497448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 23:07:59.540649   64386 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 23:07:59.542806   64386 out.go:177] * Starting "newest-cni-497448" primary control-plane node in "newest-cni-497448" cluster
	I0410 23:07:59.544133   64386 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 23:07:59.544169   64386 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0410 23:07:59.544184   64386 cache.go:56] Caching tarball of preloaded images
	I0410 23:07:59.544284   64386 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 23:07:59.544299   64386 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.1 on crio
	I0410 23:07:59.544390   64386 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/newest-cni-497448/config.json ...
	I0410 23:07:59.544436   64386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/newest-cni-497448/config.json: {Name:mk90dfc4cf9f6a59d888269524914ebed641b31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 23:07:59.544600   64386 start.go:360] acquireMachinesLock for newest-cni-497448: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 23:07:59.544635   64386 start.go:364] duration metric: took 18.958µs to acquireMachinesLock for "newest-cni-497448"
	I0410 23:07:59.544658   64386 start.go:93] Provisioning new machine with config: &{Name:newest-cni-497448 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0-rc.1 ClusterName:newest-cni-497448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 23:07:59.544744   64386 start.go:125] createHost starting for "" (driver="kvm2")
	I0410 23:07:59.546457   64386 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0410 23:07:59.546584   64386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 23:07:59.546618   64386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 23:07:59.561999   64386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0410 23:07:59.562449   64386 main.go:141] libmachine: () Calling .GetVersion
	I0410 23:07:59.563012   64386 main.go:141] libmachine: Using API Version  1
	I0410 23:07:59.563044   64386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 23:07:59.563378   64386 main.go:141] libmachine: () Calling .GetMachineName
	I0410 23:07:59.563546   64386 main.go:141] libmachine: (newest-cni-497448) Calling .GetMachineName
	I0410 23:07:59.563688   64386 main.go:141] libmachine: (newest-cni-497448) Calling .DriverName
	I0410 23:07:59.563840   64386 start.go:159] libmachine.API.Create for "newest-cni-497448" (driver="kvm2")
	I0410 23:07:59.563876   64386 client.go:168] LocalClient.Create starting
	I0410 23:07:59.563911   64386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem
	I0410 23:07:59.563961   64386 main.go:141] libmachine: Decoding PEM data...
	I0410 23:07:59.563985   64386 main.go:141] libmachine: Parsing certificate...
	I0410 23:07:59.564039   64386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem
	I0410 23:07:59.564060   64386 main.go:141] libmachine: Decoding PEM data...
	I0410 23:07:59.564069   64386 main.go:141] libmachine: Parsing certificate...
	I0410 23:07:59.564086   64386 main.go:141] libmachine: Running pre-create checks...
	I0410 23:07:59.564094   64386 main.go:141] libmachine: (newest-cni-497448) Calling .PreCreateCheck
	I0410 23:07:59.564445   64386 main.go:141] libmachine: (newest-cni-497448) Calling .GetConfigRaw
	I0410 23:07:59.564802   64386 main.go:141] libmachine: Creating machine...
	I0410 23:07:59.564816   64386 main.go:141] libmachine: (newest-cni-497448) Calling .Create
	I0410 23:07:59.564919   64386 main.go:141] libmachine: (newest-cni-497448) Creating KVM machine...
	I0410 23:07:59.566153   64386 main.go:141] libmachine: (newest-cni-497448) DBG | found existing default KVM network
	I0410 23:07:59.567361   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:07:59.567192   64409 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:57:4f:87} reservation:<nil>}
	I0410 23:07:59.568101   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:07:59.568012   64409 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:03:bb:bb} reservation:<nil>}
	I0410 23:07:59.569168   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:07:59.569095   64409 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e68a0}
	I0410 23:07:59.569199   64386 main.go:141] libmachine: (newest-cni-497448) DBG | created network xml: 
	I0410 23:07:59.569213   64386 main.go:141] libmachine: (newest-cni-497448) DBG | <network>
	I0410 23:07:59.569225   64386 main.go:141] libmachine: (newest-cni-497448) DBG |   <name>mk-newest-cni-497448</name>
	I0410 23:07:59.569230   64386 main.go:141] libmachine: (newest-cni-497448) DBG |   <dns enable='no'/>
	I0410 23:07:59.569241   64386 main.go:141] libmachine: (newest-cni-497448) DBG |   
	I0410 23:07:59.569249   64386 main.go:141] libmachine: (newest-cni-497448) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0410 23:07:59.569273   64386 main.go:141] libmachine: (newest-cni-497448) DBG |     <dhcp>
	I0410 23:07:59.569295   64386 main.go:141] libmachine: (newest-cni-497448) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0410 23:07:59.569343   64386 main.go:141] libmachine: (newest-cni-497448) DBG |     </dhcp>
	I0410 23:07:59.569350   64386 main.go:141] libmachine: (newest-cni-497448) DBG |   </ip>
	I0410 23:07:59.569361   64386 main.go:141] libmachine: (newest-cni-497448) DBG |   
	I0410 23:07:59.569372   64386 main.go:141] libmachine: (newest-cni-497448) DBG | </network>
	I0410 23:07:59.569382   64386 main.go:141] libmachine: (newest-cni-497448) DBG | 
	I0410 23:07:59.574721   64386 main.go:141] libmachine: (newest-cni-497448) DBG | trying to create private KVM network mk-newest-cni-497448 192.168.61.0/24...
	I0410 23:07:59.645207   64386 main.go:141] libmachine: (newest-cni-497448) DBG | private KVM network mk-newest-cni-497448 192.168.61.0/24 created
	I0410 23:07:59.645245   64386 main.go:141] libmachine: (newest-cni-497448) Setting up store path in /home/jenkins/minikube-integration/18610-5679/.minikube/machines/newest-cni-497448 ...
	I0410 23:07:59.645260   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:07:59.645210   64409 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 23:07:59.645321   64386 main.go:141] libmachine: (newest-cni-497448) Building disk image from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso
	I0410 23:07:59.645364   64386 main.go:141] libmachine: (newest-cni-497448) Downloading /home/jenkins/minikube-integration/18610-5679/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso...
	I0410 23:07:59.877371   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:07:59.877252   64409 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/newest-cni-497448/id_rsa...
	I0410 23:08:00.008033   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:00.007901   64409 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/newest-cni-497448/newest-cni-497448.rawdisk...
	I0410 23:08:00.008061   64386 main.go:141] libmachine: (newest-cni-497448) DBG | Writing magic tar header
	I0410 23:08:00.008075   64386 main.go:141] libmachine: (newest-cni-497448) DBG | Writing SSH key tar header
	I0410 23:08:00.008088   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:00.008009   64409 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/newest-cni-497448 ...
	I0410 23:08:00.008107   64386 main.go:141] libmachine: (newest-cni-497448) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/newest-cni-497448
	I0410 23:08:00.008125   64386 main.go:141] libmachine: (newest-cni-497448) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube/machines
	I0410 23:08:00.008150   64386 main.go:141] libmachine: (newest-cni-497448) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines/newest-cni-497448 (perms=drwx------)
	I0410 23:08:00.008160   64386 main.go:141] libmachine: (newest-cni-497448) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 23:08:00.008178   64386 main.go:141] libmachine: (newest-cni-497448) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube/machines (perms=drwxr-xr-x)
	I0410 23:08:00.008195   64386 main.go:141] libmachine: (newest-cni-497448) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18610-5679
	I0410 23:08:00.008209   64386 main.go:141] libmachine: (newest-cni-497448) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679/.minikube (perms=drwxr-xr-x)
	I0410 23:08:00.008224   64386 main.go:141] libmachine: (newest-cni-497448) Setting executable bit set on /home/jenkins/minikube-integration/18610-5679 (perms=drwxrwxr-x)
	I0410 23:08:00.008238   64386 main.go:141] libmachine: (newest-cni-497448) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0410 23:08:00.008257   64386 main.go:141] libmachine: (newest-cni-497448) DBG | Checking permissions on dir: /home/jenkins
	I0410 23:08:00.008269   64386 main.go:141] libmachine: (newest-cni-497448) DBG | Checking permissions on dir: /home
	I0410 23:08:00.008286   64386 main.go:141] libmachine: (newest-cni-497448) DBG | Skipping /home - not owner
	I0410 23:08:00.008303   64386 main.go:141] libmachine: (newest-cni-497448) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0410 23:08:00.008317   64386 main.go:141] libmachine: (newest-cni-497448) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0410 23:08:00.008328   64386 main.go:141] libmachine: (newest-cni-497448) Creating domain...
	I0410 23:08:00.009575   64386 main.go:141] libmachine: (newest-cni-497448) define libvirt domain using xml: 
	I0410 23:08:00.009602   64386 main.go:141] libmachine: (newest-cni-497448) <domain type='kvm'>
	I0410 23:08:00.009624   64386 main.go:141] libmachine: (newest-cni-497448)   <name>newest-cni-497448</name>
	I0410 23:08:00.009640   64386 main.go:141] libmachine: (newest-cni-497448)   <memory unit='MiB'>2200</memory>
	I0410 23:08:00.009674   64386 main.go:141] libmachine: (newest-cni-497448)   <vcpu>2</vcpu>
	I0410 23:08:00.009710   64386 main.go:141] libmachine: (newest-cni-497448)   <features>
	I0410 23:08:00.009723   64386 main.go:141] libmachine: (newest-cni-497448)     <acpi/>
	I0410 23:08:00.009733   64386 main.go:141] libmachine: (newest-cni-497448)     <apic/>
	I0410 23:08:00.009747   64386 main.go:141] libmachine: (newest-cni-497448)     <pae/>
	I0410 23:08:00.009763   64386 main.go:141] libmachine: (newest-cni-497448)     
	I0410 23:08:00.009776   64386 main.go:141] libmachine: (newest-cni-497448)   </features>
	I0410 23:08:00.009786   64386 main.go:141] libmachine: (newest-cni-497448)   <cpu mode='host-passthrough'>
	I0410 23:08:00.009810   64386 main.go:141] libmachine: (newest-cni-497448)   
	I0410 23:08:00.009835   64386 main.go:141] libmachine: (newest-cni-497448)   </cpu>
	I0410 23:08:00.009864   64386 main.go:141] libmachine: (newest-cni-497448)   <os>
	I0410 23:08:00.009887   64386 main.go:141] libmachine: (newest-cni-497448)     <type>hvm</type>
	I0410 23:08:00.009912   64386 main.go:141] libmachine: (newest-cni-497448)     <boot dev='cdrom'/>
	I0410 23:08:00.009924   64386 main.go:141] libmachine: (newest-cni-497448)     <boot dev='hd'/>
	I0410 23:08:00.009933   64386 main.go:141] libmachine: (newest-cni-497448)     <bootmenu enable='no'/>
	I0410 23:08:00.009943   64386 main.go:141] libmachine: (newest-cni-497448)   </os>
	I0410 23:08:00.009951   64386 main.go:141] libmachine: (newest-cni-497448)   <devices>
	I0410 23:08:00.009971   64386 main.go:141] libmachine: (newest-cni-497448)     <disk type='file' device='cdrom'>
	I0410 23:08:00.009995   64386 main.go:141] libmachine: (newest-cni-497448)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/newest-cni-497448/boot2docker.iso'/>
	I0410 23:08:00.010017   64386 main.go:141] libmachine: (newest-cni-497448)       <target dev='hdc' bus='scsi'/>
	I0410 23:08:00.010029   64386 main.go:141] libmachine: (newest-cni-497448)       <readonly/>
	I0410 23:08:00.010036   64386 main.go:141] libmachine: (newest-cni-497448)     </disk>
	I0410 23:08:00.010048   64386 main.go:141] libmachine: (newest-cni-497448)     <disk type='file' device='disk'>
	I0410 23:08:00.010064   64386 main.go:141] libmachine: (newest-cni-497448)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0410 23:08:00.010085   64386 main.go:141] libmachine: (newest-cni-497448)       <source file='/home/jenkins/minikube-integration/18610-5679/.minikube/machines/newest-cni-497448/newest-cni-497448.rawdisk'/>
	I0410 23:08:00.010100   64386 main.go:141] libmachine: (newest-cni-497448)       <target dev='hda' bus='virtio'/>
	I0410 23:08:00.010114   64386 main.go:141] libmachine: (newest-cni-497448)     </disk>
	I0410 23:08:00.010126   64386 main.go:141] libmachine: (newest-cni-497448)     <interface type='network'>
	I0410 23:08:00.010138   64386 main.go:141] libmachine: (newest-cni-497448)       <source network='mk-newest-cni-497448'/>
	I0410 23:08:00.010163   64386 main.go:141] libmachine: (newest-cni-497448)       <model type='virtio'/>
	I0410 23:08:00.010182   64386 main.go:141] libmachine: (newest-cni-497448)     </interface>
	I0410 23:08:00.010192   64386 main.go:141] libmachine: (newest-cni-497448)     <interface type='network'>
	I0410 23:08:00.010200   64386 main.go:141] libmachine: (newest-cni-497448)       <source network='default'/>
	I0410 23:08:00.010212   64386 main.go:141] libmachine: (newest-cni-497448)       <model type='virtio'/>
	I0410 23:08:00.010219   64386 main.go:141] libmachine: (newest-cni-497448)     </interface>
	I0410 23:08:00.010232   64386 main.go:141] libmachine: (newest-cni-497448)     <serial type='pty'>
	I0410 23:08:00.010242   64386 main.go:141] libmachine: (newest-cni-497448)       <target port='0'/>
	I0410 23:08:00.010251   64386 main.go:141] libmachine: (newest-cni-497448)     </serial>
	I0410 23:08:00.010261   64386 main.go:141] libmachine: (newest-cni-497448)     <console type='pty'>
	I0410 23:08:00.010273   64386 main.go:141] libmachine: (newest-cni-497448)       <target type='serial' port='0'/>
	I0410 23:08:00.010283   64386 main.go:141] libmachine: (newest-cni-497448)     </console>
	I0410 23:08:00.010291   64386 main.go:141] libmachine: (newest-cni-497448)     <rng model='virtio'>
	I0410 23:08:00.010319   64386 main.go:141] libmachine: (newest-cni-497448)       <backend model='random'>/dev/random</backend>
	I0410 23:08:00.010350   64386 main.go:141] libmachine: (newest-cni-497448)     </rng>
	I0410 23:08:00.010363   64386 main.go:141] libmachine: (newest-cni-497448)     
	I0410 23:08:00.010375   64386 main.go:141] libmachine: (newest-cni-497448)     
	I0410 23:08:00.010387   64386 main.go:141] libmachine: (newest-cni-497448)   </devices>
	I0410 23:08:00.010395   64386 main.go:141] libmachine: (newest-cni-497448) </domain>
	I0410 23:08:00.010406   64386 main.go:141] libmachine: (newest-cni-497448) 
	I0410 23:08:00.014764   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:ad:51:cb in network default
	I0410 23:08:00.015466   64386 main.go:141] libmachine: (newest-cni-497448) Ensuring networks are active...
	I0410 23:08:00.015499   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:00.016093   64386 main.go:141] libmachine: (newest-cni-497448) Ensuring network default is active
	I0410 23:08:00.016361   64386 main.go:141] libmachine: (newest-cni-497448) Ensuring network mk-newest-cni-497448 is active
	I0410 23:08:00.017096   64386 main.go:141] libmachine: (newest-cni-497448) Getting domain xml...
	I0410 23:08:00.017943   64386 main.go:141] libmachine: (newest-cni-497448) Creating domain...
	I0410 23:08:01.289386   64386 main.go:141] libmachine: (newest-cni-497448) Waiting to get IP...
	I0410 23:08:01.290233   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:01.290717   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:01.290755   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:01.290699   64409 retry.go:31] will retry after 236.805558ms: waiting for machine to come up
	I0410 23:08:01.529051   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:01.529544   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:01.529566   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:01.529494   64409 retry.go:31] will retry after 288.144771ms: waiting for machine to come up
	I0410 23:08:01.819084   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:01.819618   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:01.819646   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:01.819578   64409 retry.go:31] will retry after 312.244752ms: waiting for machine to come up
	I0410 23:08:02.133070   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:02.133623   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:02.133651   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:02.133580   64409 retry.go:31] will retry after 476.262107ms: waiting for machine to come up
	I0410 23:08:02.611275   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:02.611926   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:02.611989   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:02.611848   64409 retry.go:31] will retry after 654.400707ms: waiting for machine to come up
	I0410 23:08:03.268287   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:03.268728   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:03.268754   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:03.268694   64409 retry.go:31] will retry after 850.245148ms: waiting for machine to come up
	I0410 23:08:04.120748   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:04.121190   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:04.121249   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:04.121159   64409 retry.go:31] will retry after 1.037385569s: waiting for machine to come up
	I0410 23:08:05.159905   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:05.160439   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:05.160474   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:05.160368   64409 retry.go:31] will retry after 934.002747ms: waiting for machine to come up
	I0410 23:08:06.096187   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:06.096730   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:06.096773   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:06.096693   64409 retry.go:31] will retry after 1.782497167s: waiting for machine to come up
	I0410 23:08:07.881632   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:07.882042   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:07.882074   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:07.881991   64409 retry.go:31] will retry after 1.93761887s: waiting for machine to come up
	I0410 23:08:09.820762   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:09.821290   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:09.821323   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:09.821243   64409 retry.go:31] will retry after 2.09333418s: waiting for machine to come up
	I0410 23:08:11.916043   64386 main.go:141] libmachine: (newest-cni-497448) DBG | domain newest-cni-497448 has defined MAC address 52:54:00:9b:43:24 in network mk-newest-cni-497448
	I0410 23:08:11.916682   64386 main.go:141] libmachine: (newest-cni-497448) DBG | unable to find current IP address of domain newest-cni-497448 in network mk-newest-cni-497448
	I0410 23:08:11.916726   64386 main.go:141] libmachine: (newest-cni-497448) DBG | I0410 23:08:11.916648   64409 retry.go:31] will retry after 3.028918462s: waiting for machine to come up
	
	
	==> CRI-O <==
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.230905548Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:312e184bed65496636b4cf4bd275dde4ae1e62b7853d9b3ac120d7979d80980c,Metadata:&PodSandboxMetadata{Name:kube-proxy-24vhc,Uid:ca175e85-76f2-47d2-91a5-0248194a88e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789704317225096,Labels:map[string]string{controller-revision-hash: 7b4cd945b6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-24vhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca175e85-76f2-47d2-91a5-0248194a88e8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:55:02.508138350Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:98aa9bcbe6e4737a4357fc234ee3619f5386c9435af0024c874ff0a61830d06d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-v599p,Uid:f30c2827-5930-41d4-82b7-edfb839b3a
74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789704137465810,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-v599p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30c2827-5930-41d4-82b7-edfb839b3a74,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:55:02.924297110Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c61163b7108bf85d4537d8c77f569e0131b69953317227f9772345f55bbc2c7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3232daa9-da88-4152-97c8-e86b3d50b0b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789704122805714,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3232daa9-da88-4152-97c8-e86b3d
50b0b8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-10T22:55:03.511112695Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:81ebe63b1ab99894de7fb8864353e84000bb887aa36b4ba9cf25762032031a9a,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-bj59f,Uid:4aace435-90b
e-456a-8a85-dbee0026212c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789704083606728,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-bj59f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aace435-90be-456a-8a85-dbee0026212c,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:55:03.764547869Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:41c42efa2a202eb5275bd92b43d655e2d97ce89294a073eb67f34163a410bf1c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jm2zw,Uid:9d8b995c-717e-43a5-a963-f07a4f7a76a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789704078393155,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm2zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d8b995c-717e-43a5-a963-f07a4f7a76a8,k8s-app: kube-dns,pod-templat
e-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-10T22:55:02.868755308Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:25b1516115f454d2e578c2f96caaaf77dfdf11228328346a5e7cd260067cd299,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-646133,Uid:77206909f47e74b9e84d7a2b5eedaafc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789683407444294,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77206909f47e74b9e84d7a2b5eedaafc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 77206909f47e74b9e84d7a2b5eedaafc,kubernetes.io/config.seen: 2024-04-10T22:54:42.957773890Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3c389c5244238f1502f663a622edcfcdb39c842cc4f2ae8928f4e315e184c244,Metadata:&PodSandboxMeta
data{Name:kube-apiserver-no-preload-646133,Uid:82910e12df56feafb80402f4702155af,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1712789683405077281,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.17:8443,kubernetes.io/config.hash: 82910e12df56feafb80402f4702155af,kubernetes.io/config.seen: 2024-04-10T22:54:42.957772202Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:23d5696225b73cd34b393dcbb17c06fffad8e530ba7eb26fe7c01152a53e47d2,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-646133,Uid:e9048145b75d9f795053d905e2e8df6b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789683404749182,Labels:map[string]string{component: etcd,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: etcd-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9048145b75d9f795053d905e2e8df6b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.17:2379,kubernetes.io/config.hash: e9048145b75d9f795053d905e2e8df6b,kubernetes.io/config.seen: 2024-04-10T22:54:42.957768003Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:552181e571efbb16150f1f7d7ef33924726c87eebc816065701e2533cfc0e011,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-646133,Uid:8b23d99268ec85dfc255b89a65a2b7a6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712789683396914992,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b23d99268ec85dfc255b89a65a2b7a6,tier: control-plane,},Annotations:map[string]string{
kubernetes.io/config.hash: 8b23d99268ec85dfc255b89a65a2b7a6,kubernetes.io/config.seen: 2024-04-10T22:54:42.957774770Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=21adc7a0-c5c3-4206-be19-3d0896858ef1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.231993065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be054608-2f7b-4801-b603-dcbb18cc0f38 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.232092567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be054608-2f7b-4801-b603-dcbb18cc0f38 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.235199021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec8d0d02104473d1b74d3f7cdd550cd9c1329263c9ae211f5d79d32a15895ae0,PodSandboxId:98aa9bcbe6e4737a4357fc234ee3619f5386c9435af0024c874ff0a61830d06d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704759074785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v599p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30c2827-5930-41d4-82b7-edfb839b3a74,},Annotations:map[string]string{io.kubernetes.container.hash: fdf46a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e28085ff85adfb86327f333e1bfd9473635076de9a2742d0d7db843b0332df,PodSandboxId:41c42efa2a202eb5275bd92b43d655e2d97ce89294a073eb67f34163a410bf1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704770245179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm2zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d8b995c-717e-43a5-a963-f07a4f7a76a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20d22dca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a76d2c57e073bd9bc6ada95b65d50ec62897e37c5bceb09a83810b1013edc46,PodSandboxId:312e184bed65496636b4cf4bd275dde4ae1e62b7853d9b3ac120d7979d80980c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,State:CONTAINER_RUNNIN
G,CreatedAt:1712789704681414763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24vhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca175e85-76f2-47d2-91a5-0248194a88e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b62c1d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6174309d2279bfc9949db4340f399294d0a6a8247adb8e4de618f5facb06854,PodSandboxId:5c61163b7108bf85d4537d8c77f569e0131b69953317227f9772345f55bbc2c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171278970433
8297649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3232daa9-da88-4152-97c8-e86b3d50b0b8,},Annotations:map[string]string{io.kubernetes.container.hash: cbcd7332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a95b9d5058af0971fbe9adf827d0108e8ff6b55f972a8b472a87281cd5c8b3,PodSandboxId:23d5696225b73cd34b393dcbb17c06fffad8e530ba7eb26fe7c01152a53e47d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789683747194956,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9048145b75d9f795053d905e2e8df6b,},Annotations:map[string]string{io.kubernetes.container.hash: 82d5fd8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63f2ae0fa5319f246ad59d82927c2ad707f20092e6b32af71a1ef8a06307d39,PodSandboxId:3c389c5244238f1502f663a622edcfcdb39c842cc4f2ae8928f4e315e184c244,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712789683719979983,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadf0431cad5782a96391f4d14bd31409f9f925c9e8eedcd6ab3b49a064480,PodSandboxId:25b1516115f454d2e578c2f96caaaf77dfdf11228328346a5e7cd260067cd299,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712789683641924803,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77206909f47e74b9e84d7a2b5eedaafc,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95,PodSandboxId:552181e571efbb16150f1f7d7ef33924726c87eebc816065701e2533cfc0e011,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712789683649144649,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b23d99268ec85dfc255b89a65a2b7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be054608-2f7b-4801-b603-dcbb18cc0f38 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.268336687Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=016f540d-14a1-4798-a778-31ace5471939 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.268436321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=016f540d-14a1-4798-a778-31ace5471939 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.270142169Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b519f069-a329-401d-9ea8-63b305cd20d0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.270725224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790495270693137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b519f069-a329-401d-9ea8-63b305cd20d0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.271210222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b835d09c-dcb2-4ae8-843a-d30604504a0e name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.271264412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b835d09c-dcb2-4ae8-843a-d30604504a0e name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.271475950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec8d0d02104473d1b74d3f7cdd550cd9c1329263c9ae211f5d79d32a15895ae0,PodSandboxId:98aa9bcbe6e4737a4357fc234ee3619f5386c9435af0024c874ff0a61830d06d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704759074785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v599p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30c2827-5930-41d4-82b7-edfb839b3a74,},Annotations:map[string]string{io.kubernetes.container.hash: fdf46a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e28085ff85adfb86327f333e1bfd9473635076de9a2742d0d7db843b0332df,PodSandboxId:41c42efa2a202eb5275bd92b43d655e2d97ce89294a073eb67f34163a410bf1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704770245179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm2zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d8b995c-717e-43a5-a963-f07a4f7a76a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20d22dca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a76d2c57e073bd9bc6ada95b65d50ec62897e37c5bceb09a83810b1013edc46,PodSandboxId:312e184bed65496636b4cf4bd275dde4ae1e62b7853d9b3ac120d7979d80980c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,State:CONTAINER_RUNNIN
G,CreatedAt:1712789704681414763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24vhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca175e85-76f2-47d2-91a5-0248194a88e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b62c1d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6174309d2279bfc9949db4340f399294d0a6a8247adb8e4de618f5facb06854,PodSandboxId:5c61163b7108bf85d4537d8c77f569e0131b69953317227f9772345f55bbc2c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171278970433
8297649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3232daa9-da88-4152-97c8-e86b3d50b0b8,},Annotations:map[string]string{io.kubernetes.container.hash: cbcd7332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a95b9d5058af0971fbe9adf827d0108e8ff6b55f972a8b472a87281cd5c8b3,PodSandboxId:23d5696225b73cd34b393dcbb17c06fffad8e530ba7eb26fe7c01152a53e47d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789683747194956,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9048145b75d9f795053d905e2e8df6b,},Annotations:map[string]string{io.kubernetes.container.hash: 82d5fd8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63f2ae0fa5319f246ad59d82927c2ad707f20092e6b32af71a1ef8a06307d39,PodSandboxId:3c389c5244238f1502f663a622edcfcdb39c842cc4f2ae8928f4e315e184c244,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712789683719979983,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadf0431cad5782a96391f4d14bd31409f9f925c9e8eedcd6ab3b49a064480,PodSandboxId:25b1516115f454d2e578c2f96caaaf77dfdf11228328346a5e7cd260067cd299,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712789683641924803,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77206909f47e74b9e84d7a2b5eedaafc,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95,PodSandboxId:552181e571efbb16150f1f7d7ef33924726c87eebc816065701e2533cfc0e011,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712789683649144649,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b23d99268ec85dfc255b89a65a2b7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d763c490e6c5df0e625305c075d661241fa8d19dcca80f810ba34f1696f93e,PodSandboxId:3d26f66a41926c5e65c921e6568a934c5685981497e4ea29c9426bc6a5c737ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_EXITED,CreatedAt:1712789389129474602,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b835d09c-dcb2-4ae8-843a-d30604504a0e name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.314745728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57e29ea1-e405-40ed-848d-8ed62591ec6c name=/runtime.v1.RuntimeService/Version
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.314866435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57e29ea1-e405-40ed-848d-8ed62591ec6c name=/runtime.v1.RuntimeService/Version
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.316999568Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7eda8cb5-e139-4d0a-91dd-0944f36fef13 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.317457780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790495317430594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7eda8cb5-e139-4d0a-91dd-0944f36fef13 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.318195876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b047e04-d3a3-4ee9-a561-33930212bdf0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.318250340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b047e04-d3a3-4ee9-a561-33930212bdf0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.318455288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec8d0d02104473d1b74d3f7cdd550cd9c1329263c9ae211f5d79d32a15895ae0,PodSandboxId:98aa9bcbe6e4737a4357fc234ee3619f5386c9435af0024c874ff0a61830d06d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704759074785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v599p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30c2827-5930-41d4-82b7-edfb839b3a74,},Annotations:map[string]string{io.kubernetes.container.hash: fdf46a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e28085ff85adfb86327f333e1bfd9473635076de9a2742d0d7db843b0332df,PodSandboxId:41c42efa2a202eb5275bd92b43d655e2d97ce89294a073eb67f34163a410bf1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704770245179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm2zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d8b995c-717e-43a5-a963-f07a4f7a76a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20d22dca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a76d2c57e073bd9bc6ada95b65d50ec62897e37c5bceb09a83810b1013edc46,PodSandboxId:312e184bed65496636b4cf4bd275dde4ae1e62b7853d9b3ac120d7979d80980c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,State:CONTAINER_RUNNIN
G,CreatedAt:1712789704681414763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24vhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca175e85-76f2-47d2-91a5-0248194a88e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b62c1d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6174309d2279bfc9949db4340f399294d0a6a8247adb8e4de618f5facb06854,PodSandboxId:5c61163b7108bf85d4537d8c77f569e0131b69953317227f9772345f55bbc2c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171278970433
8297649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3232daa9-da88-4152-97c8-e86b3d50b0b8,},Annotations:map[string]string{io.kubernetes.container.hash: cbcd7332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a95b9d5058af0971fbe9adf827d0108e8ff6b55f972a8b472a87281cd5c8b3,PodSandboxId:23d5696225b73cd34b393dcbb17c06fffad8e530ba7eb26fe7c01152a53e47d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789683747194956,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9048145b75d9f795053d905e2e8df6b,},Annotations:map[string]string{io.kubernetes.container.hash: 82d5fd8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63f2ae0fa5319f246ad59d82927c2ad707f20092e6b32af71a1ef8a06307d39,PodSandboxId:3c389c5244238f1502f663a622edcfcdb39c842cc4f2ae8928f4e315e184c244,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712789683719979983,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadf0431cad5782a96391f4d14bd31409f9f925c9e8eedcd6ab3b49a064480,PodSandboxId:25b1516115f454d2e578c2f96caaaf77dfdf11228328346a5e7cd260067cd299,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712789683641924803,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77206909f47e74b9e84d7a2b5eedaafc,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95,PodSandboxId:552181e571efbb16150f1f7d7ef33924726c87eebc816065701e2533cfc0e011,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712789683649144649,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b23d99268ec85dfc255b89a65a2b7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d763c490e6c5df0e625305c075d661241fa8d19dcca80f810ba34f1696f93e,PodSandboxId:3d26f66a41926c5e65c921e6568a934c5685981497e4ea29c9426bc6a5c737ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_EXITED,CreatedAt:1712789389129474602,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b047e04-d3a3-4ee9-a561-33930212bdf0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.375572387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f05cb94c-86dd-407c-a022-b4e6f1ef3c14 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.375725953Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f05cb94c-86dd-407c-a022-b4e6f1ef3c14 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.377327009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74e41adc-390b-4c26-b8ca-5a2c549e1767 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.377736668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790495377714872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74e41adc-390b-4c26-b8ca-5a2c549e1767 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.378342946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9010efe-0646-41a2-9b91-51b7859d4208 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.378395117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9010efe-0646-41a2-9b91-51b7859d4208 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:08:15 no-preload-646133 crio[725]: time="2024-04-10 23:08:15.378667184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec8d0d02104473d1b74d3f7cdd550cd9c1329263c9ae211f5d79d32a15895ae0,PodSandboxId:98aa9bcbe6e4737a4357fc234ee3619f5386c9435af0024c874ff0a61830d06d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704759074785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v599p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30c2827-5930-41d4-82b7-edfb839b3a74,},Annotations:map[string]string{io.kubernetes.container.hash: fdf46a31,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6e28085ff85adfb86327f333e1bfd9473635076de9a2742d0d7db843b0332df,PodSandboxId:41c42efa2a202eb5275bd92b43d655e2d97ce89294a073eb67f34163a410bf1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712789704770245179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm2zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d8b995c-717e-43a5-a963-f07a4f7a76a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20d22dca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a76d2c57e073bd9bc6ada95b65d50ec62897e37c5bceb09a83810b1013edc46,PodSandboxId:312e184bed65496636b4cf4bd275dde4ae1e62b7853d9b3ac120d7979d80980c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061,State:CONTAINER_RUNNIN
G,CreatedAt:1712789704681414763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24vhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca175e85-76f2-47d2-91a5-0248194a88e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b62c1d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6174309d2279bfc9949db4340f399294d0a6a8247adb8e4de618f5facb06854,PodSandboxId:5c61163b7108bf85d4537d8c77f569e0131b69953317227f9772345f55bbc2c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171278970433
8297649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3232daa9-da88-4152-97c8-e86b3d50b0b8,},Annotations:map[string]string{io.kubernetes.container.hash: cbcd7332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a95b9d5058af0971fbe9adf827d0108e8ff6b55f972a8b472a87281cd5c8b3,PodSandboxId:23d5696225b73cd34b393dcbb17c06fffad8e530ba7eb26fe7c01152a53e47d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712789683747194956,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9048145b75d9f795053d905e2e8df6b,},Annotations:map[string]string{io.kubernetes.container.hash: 82d5fd8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f63f2ae0fa5319f246ad59d82927c2ad707f20092e6b32af71a1ef8a06307d39,PodSandboxId:3c389c5244238f1502f663a622edcfcdb39c842cc4f2ae8928f4e315e184c244,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_RUNNING,CreatedAt:1712789683719979983,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadf0431cad5782a96391f4d14bd31409f9f925c9e8eedcd6ab3b49a064480,PodSandboxId:25b1516115f454d2e578c2f96caaaf77dfdf11228328346a5e7cd260067cd299,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090,State:CONTAINER_RUNNING,CreatedAt:1712789683641924803,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77206909f47e74b9e84d7a2b5eedaafc,},Annotations:map[string]string{io.kubernetes.container.hash: 558a0b01,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95,PodSandboxId:552181e571efbb16150f1f7d7ef33924726c87eebc816065701e2533cfc0e011,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b,State:CONTAINER_RUNNING,CreatedAt:1712789683649144649,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b23d99268ec85dfc255b89a65a2b7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9c2c6ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d763c490e6c5df0e625305c075d661241fa8d19dcca80f810ba34f1696f93e,PodSandboxId:3d26f66a41926c5e65c921e6568a934c5685981497e4ea29c9426bc6a5c737ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895,State:CONTAINER_EXITED,CreatedAt:1712789389129474602,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-646133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82910e12df56feafb80402f4702155af,},Annotations:map[string]string{io.kubernetes.container.hash: 16cce62d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9010efe-0646-41a2-9b91-51b7859d4208 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6e28085ff85a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   41c42efa2a202       coredns-7db6d8ff4d-jm2zw
	ec8d0d0210447       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   98aa9bcbe6e47       coredns-7db6d8ff4d-v599p
	4a76d2c57e073       69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061   13 minutes ago      Running             kube-proxy                0                   312e184bed654       kube-proxy-24vhc
	d6174309d2279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   5c61163b7108b       storage-provisioner
	e6a95b9d5058a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   13 minutes ago      Running             etcd                      2                   23d5696225b73       etcd-no-preload-646133
	f63f2ae0fa531       bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895   13 minutes ago      Running             kube-apiserver            2                   3c389c5244238       kube-apiserver-no-preload-646133
	f3638946755f9       ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b   13 minutes ago      Running             kube-scheduler            2                   552181e571efb       kube-scheduler-no-preload-646133
	60fadf0431cad       577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090   13 minutes ago      Running             kube-controller-manager   2                   25b1516115f45       kube-controller-manager-no-preload-646133
	71d763c490e6c       bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895   18 minutes ago      Exited              kube-apiserver            1                   3d26f66a41926       kube-apiserver-no-preload-646133
	
	
	==> coredns [e6e28085ff85adfb86327f333e1bfd9473635076de9a2742d0d7db843b0332df] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ec8d0d02104473d1b74d3f7cdd550cd9c1329263c9ae211f5d79d32a15895ae0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-646133
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-646133
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2
	                    minikube.k8s.io/name=no-preload-646133
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_10T22_54_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Apr 2024 22:54:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-646133
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Apr 2024 23:08:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Apr 2024 23:05:22 +0000   Wed, 10 Apr 2024 22:54:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Apr 2024 23:05:22 +0000   Wed, 10 Apr 2024 22:54:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Apr 2024 23:05:22 +0000   Wed, 10 Apr 2024 22:54:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Apr 2024 23:05:22 +0000   Wed, 10 Apr 2024 22:54:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.17
	  Hostname:    no-preload-646133
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8efe7b83d024249b9b4267a60de5316
	  System UUID:                d8efe7b8-3d02-4249-b9b4-267a60de5316
	  Boot ID:                    6711f87d-c85c-484a-a5ca-3dbae181297c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.1
	  Kube-Proxy Version:         v1.30.0-rc.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jm2zw                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-v599p                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-646133                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-646133             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-646133    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-24vhc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-646133             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-569cc877fc-bj59f              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node no-preload-646133 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node no-preload-646133 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node no-preload-646133 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-646133 event: Registered Node no-preload-646133 in Controller
	
	
	==> dmesg <==
	[  +0.054370] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042767] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.902286] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.003041] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.648525] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.457128] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.062454] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.082338] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.168635] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.133630] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.293343] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +17.290979] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[  +0.062501] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.355108] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +4.656273] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.709023] kauditd_printk_skb: 79 callbacks suppressed
	[Apr10 22:54] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.343083] systemd-fstab-generator[3991]: Ignoring "noauto" option for root device
	[  +6.554737] systemd-fstab-generator[4313]: Ignoring "noauto" option for root device
	[  +0.091931] kauditd_printk_skb: 54 callbacks suppressed
	[Apr10 22:55] systemd-fstab-generator[4515]: Ignoring "noauto" option for root device
	[  +0.118390] kauditd_printk_skb: 12 callbacks suppressed
	[Apr10 22:56] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [e6a95b9d5058af0971fbe9adf827d0108e8ff6b55f972a8b472a87281cd5c8b3] <==
	{"level":"info","ts":"2024-04-10T22:54:44.143402Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a74ab9f845be4a88","initial-advertise-peer-urls":["https://192.168.50.17:2380"],"listen-peer-urls":["https://192.168.50.17:2380"],"advertise-client-urls":["https://192.168.50.17:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.17:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-10T22:54:44.143471Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-10T22:54:44.143753Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.17:2380"}
	{"level":"info","ts":"2024-04-10T22:54:44.143792Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.17:2380"}
	{"level":"info","ts":"2024-04-10T22:54:44.189646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-10T22:54:44.189826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-10T22:54:44.189859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 received MsgPreVoteResp from a74ab9f845be4a88 at term 1"}
	{"level":"info","ts":"2024-04-10T22:54:44.190101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 became candidate at term 2"}
	{"level":"info","ts":"2024-04-10T22:54:44.190209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 received MsgVoteResp from a74ab9f845be4a88 at term 2"}
	{"level":"info","ts":"2024-04-10T22:54:44.190243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a74ab9f845be4a88 became leader at term 2"}
	{"level":"info","ts":"2024-04-10T22:54:44.190317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a74ab9f845be4a88 elected leader a74ab9f845be4a88 at term 2"}
	{"level":"info","ts":"2024-04-10T22:54:44.195057Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:54:44.195977Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a74ab9f845be4a88","local-member-attributes":"{Name:no-preload-646133 ClientURLs:[https://192.168.50.17:2379]}","request-path":"/0/members/a74ab9f845be4a88/attributes","cluster-id":"e7a7808069af5882","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-10T22:54:44.196203Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:54:44.200618Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-10T22:54:44.200736Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-10T22:54:44.196487Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-10T22:54:44.196672Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e7a7808069af5882","local-member-id":"a74ab9f845be4a88","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:54:44.201228Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:54:44.201285Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-10T22:54:44.206873Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.17:2379"}
	{"level":"info","ts":"2024-04-10T22:54:44.259736Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-10T23:04:44.685838Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":675}
	{"level":"info","ts":"2024-04-10T23:04:44.696834Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":675,"took":"10.206032ms","hash":3079016918,"current-db-size-bytes":2191360,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2191360,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-10T23:04:44.696984Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3079016918,"revision":675,"compact-revision":-1}
	
	
	==> kernel <==
	 23:08:15 up 19 min,  0 users,  load average: 0.15, 0.22, 0.18
	Linux no-preload-646133 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [71d763c490e6c5df0e625305c075d661241fa8d19dcca80f810ba34f1696f93e] <==
	W0410 22:54:35.654229       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.662146       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.668749       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.733350       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.738034       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.846621       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.846945       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.867725       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:35.958048       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.023135       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.086929       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.104877       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.160055       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.338403       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.380038       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.395103       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.483037       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.579987       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.697149       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.781862       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:36.960485       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:37.121992       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:37.137335       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:37.222690       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0410 22:54:37.339828       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f63f2ae0fa5319f246ad59d82927c2ad707f20092e6b32af71a1ef8a06307d39] <==
	I0410 23:02:47.465681       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:04:46.468752       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:04:46.468928       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0410 23:04:47.469904       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:04:47.470102       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W0410 23:04:47.470037       1 handler_proxy.go:93] no RequestInfo found in the context
	I0410 23:04:47.470142       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0410 23:04:47.470250       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:04:47.471567       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:05:47.471053       1 handler_proxy.go:93] no RequestInfo found in the context
	W0410 23:05:47.472005       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:05:47.472088       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:05:47.472323       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0410 23:05:47.472470       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:05:47.474446       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:07:47.473038       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:07:47.473118       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0410 23:07:47.473130       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0410 23:07:47.474600       1 handler_proxy.go:93] no RequestInfo found in the context
	E0410 23:07:47.474698       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0410 23:07:47.474734       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
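
	The 503s above show the aggregated metrics API (v1beta1.metrics.k8s.io) never becoming available: the metrics-server pod backing it never gets ready (see the kubelet log further down). As a diagnostic sketch, not something the test itself runs, the APIService status could be checked directly:

	kubectl --context no-preload-646133 get apiservice v1beta1.metrics.k8s.io -o wide

	While the backend is down this would be expected to report AVAILABLE as False, matching the "service unavailable" responses logged here.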
	
	
	==> kube-controller-manager [60fadf0431cad5782a96391f4d14bd31409f9f925c9e8eedcd6ab3b49a064480] <==
	I0410 23:02:32.522087       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:03:02.057774       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:03:02.532716       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:03:32.064350       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:03:32.541221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:04:02.070393       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:04:02.549380       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:04:32.077660       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:04:32.558599       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:05:02.083810       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:05:02.567049       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:05:32.090766       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:05:32.575545       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:06:02.096652       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:06:02.587649       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0410 23:06:06.247216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="240.899µs"
	I0410 23:06:18.249012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="92.655µs"
	E0410 23:06:32.102861       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:06:32.596770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:07:02.109810       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:07:02.606729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:07:32.116317       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:07:32.615682       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0410 23:08:02.125292       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0410 23:08:02.626382       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
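
	These controller-manager errors are the same failure seen from another angle: resource-quota and garbage-collector discovery keep hitting the stale metrics.k8s.io/v1beta1 group because nothing is serving it. A quick manual probe (illustrative only, not part of the test) would be:

	kubectl --context no-preload-646133 get --raw /apis/metrics.k8s.io/v1beta1

	which should keep returning the same 503 until metrics-server actually comes up.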
	
	
	==> kube-proxy [4a76d2c57e073bd9bc6ada95b65d50ec62897e37c5bceb09a83810b1013edc46] <==
	I0410 22:55:05.170138       1 server_linux.go:69] "Using iptables proxy"
	I0410 22:55:05.194321       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.17"]
	I0410 22:55:05.246241       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0410 22:55:05.246404       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0410 22:55:05.246444       1 server_linux.go:165] "Using iptables Proxier"
	I0410 22:55:05.249897       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0410 22:55:05.250143       1 server.go:872] "Version info" version="v1.30.0-rc.1"
	I0410 22:55:05.250190       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0410 22:55:05.252259       1 config.go:192] "Starting service config controller"
	I0410 22:55:05.252314       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0410 22:55:05.252366       1 config.go:101] "Starting endpoint slice config controller"
	I0410 22:55:05.252382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0410 22:55:05.254571       1 config.go:319] "Starting node config controller"
	I0410 22:55:05.254622       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0410 22:55:05.352928       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0410 22:55:05.352993       1 shared_informer.go:320] Caches are synced for service config
	I0410 22:55:05.354974       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f3638946755f9e70fdb9934d30a1922abe47fed13817278575a833f856edca95] <==
	E0410 22:54:46.504473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0410 22:54:46.504559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0410 22:54:46.504681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0410 22:54:46.504794       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0410 22:54:47.343668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0410 22:54:47.343725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0410 22:54:47.374045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0410 22:54:47.374107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0410 22:54:47.465812       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0410 22:54:47.466057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0410 22:54:47.474841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0410 22:54:47.474964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0410 22:54:47.505477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0410 22:54:47.505980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0410 22:54:47.569782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0410 22:54:47.570083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0410 22:54:47.644334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0410 22:54:47.644408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0410 22:54:47.644459       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0410 22:54:47.644541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0410 22:54:47.681162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0410 22:54:47.683351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0410 22:54:47.991021       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0410 22:54:47.991079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0410 22:54:50.257400       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
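
	The "forbidden" errors above are the usual scheduler start-up race: its informers begin listing before the system:kube-scheduler RBAC bindings are visible, and they stop once caches sync (the final line). If they persisted, the permissions could be spot-checked with something like (a sketch, not executed by the test):

	kubectl --context no-preload-646133 auth can-i list nodes --as=system:kube-scheduler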
	
	
	==> kubelet <==
	Apr 10 23:05:49 no-preload-646133 kubelet[4320]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:05:52 no-preload-646133 kubelet[4320]: E0410 23:05:52.242784    4320 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 10 23:05:52 no-preload-646133 kubelet[4320]: E0410 23:05:52.242851    4320 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 10 23:05:52 no-preload-646133 kubelet[4320]: E0410 23:05:52.243067    4320 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t55j7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-bj59f_kube-system(4aace435-90be-456a-8a85-dbee0026212c): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 10 23:05:52 no-preload-646133 kubelet[4320]: E0410 23:05:52.243107    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:06:06 no-preload-646133 kubelet[4320]: E0410 23:06:06.229152    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:06:18 no-preload-646133 kubelet[4320]: E0410 23:06:18.229047    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:06:33 no-preload-646133 kubelet[4320]: E0410 23:06:33.231750    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:06:48 no-preload-646133 kubelet[4320]: E0410 23:06:48.228822    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:06:49 no-preload-646133 kubelet[4320]: E0410 23:06:49.255135    4320 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 10 23:06:49 no-preload-646133 kubelet[4320]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:06:49 no-preload-646133 kubelet[4320]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:06:49 no-preload-646133 kubelet[4320]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:06:49 no-preload-646133 kubelet[4320]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:07:03 no-preload-646133 kubelet[4320]: E0410 23:07:03.229176    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:07:15 no-preload-646133 kubelet[4320]: E0410 23:07:15.229955    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:07:29 no-preload-646133 kubelet[4320]: E0410 23:07:29.232701    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:07:42 no-preload-646133 kubelet[4320]: E0410 23:07:42.228630    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:07:49 no-preload-646133 kubelet[4320]: E0410 23:07:49.253377    4320 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 10 23:07:49 no-preload-646133 kubelet[4320]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 10 23:07:49 no-preload-646133 kubelet[4320]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 10 23:07:49 no-preload-646133 kubelet[4320]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 10 23:07:49 no-preload-646133 kubelet[4320]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 10 23:07:55 no-preload-646133 kubelet[4320]: E0410 23:07:55.231344    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
	Apr 10 23:08:10 no-preload-646133 kubelet[4320]: E0410 23:08:10.229097    4320 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-bj59f" podUID="4aace435-90be-456a-8a85-dbee0026212c"
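
	The kubelet sits in ErrImagePull/ImagePullBackOff because the metrics-server image is pointed at an unresolvable registry (fake.domain), which the test does on purpose so the pod never starts. The same DNS failure can be reproduced from inside the node; a sketch, assuming the profile is still running:

	out/minikube-linux-amd64 -p no-preload-646133 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4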
	
	
	==> storage-provisioner [d6174309d2279bfc9949db4340f399294d0a6a8247adb8e4de618f5facb06854] <==
	I0410 22:55:04.822158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0410 22:55:04.880121       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0410 22:55:04.884929       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0410 22:55:04.918007       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0410 22:55:04.918289       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-646133_b89aa634-ed5c-460a-8459-c995874103cc!
	I0410 22:55:04.918941       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"724620de-3bae-438f-81ec-b58b460a9711", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-646133_b89aa634-ed5c-460a-8459-c995874103cc became leader
	I0410 22:55:05.018613       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-646133_b89aa634-ed5c-460a-8459-c995874103cc!
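
	The provisioner itself is healthy: it acquired the kube-system/k8s.io-minikube-hostpath lease normally. If that ever needed verifying by hand, the leader-election record is kept as an annotation on that Endpoints object (illustrative command, not part of the run):

	kubectl --context no-preload-646133 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml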
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-646133 -n no-preload-646133
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-646133 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-bj59f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-646133 describe pod metrics-server-569cc877fc-bj59f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-646133 describe pod metrics-server-569cc877fc-bj59f: exit status 1 (62.372178ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-bj59f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-646133 describe pod metrics-server-569cc877fc-bj59f: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (244.73s)
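
The NotFound from the describe step simply means the metrics-server pod listed a moment earlier was deleted before the post-mortem helper could describe it. A variant that exits zero when the pod is already gone, as a sketch rather than what helpers_test.go actually runs, would be:

	kubectl --context no-preload-646133 -n kube-system get pod metrics-server-569cc877fc-bj59f --ignore-not-found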

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (130.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
(the identical WARNING above was emitted 66 more times while the test kept polling)
E0410 23:06:54.112443   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.178:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.178:8443: connect: connection refused
(the identical WARNING above was emitted 4 more times while the test kept polling)
E0410 23:06:59.610276   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862528 -n old-k8s-version-862528
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 2 (253.029643ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-862528" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-862528 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-862528 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.609µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-862528 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 2 (236.565196ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-862528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-862528 logs -n 25: (1.606162413s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-646133             | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:40 UTC | 10 Apr 24 22:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-407031                           | kubernetes-upgrade-407031    | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:41 UTC |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:41 UTC | 10 Apr 24 22:42 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-706500            | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC | 10 Apr 24 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862528        | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-646133                  | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-464519                              | cert-expiration-464519       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-676292 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:43 UTC |
	|         | disable-driver-mounts-676292                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-646133                                   | no-preload-646133            | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                      |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:43 UTC | 10 Apr 24 22:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862528             | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-862528                              | old-k8s-version-862528       | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-519831  | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-706500                 | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-706500                                  | embed-certs-706500           | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:44 UTC | 10 Apr 24 22:54 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-519831       | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-519831 | jenkins | v1.33.0-beta.0 | 10 Apr 24 22:46 UTC | 10 Apr 24 22:53 UTC |
	|         | default-k8s-diff-port-519831                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 22:46:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 22:46:47.395706   58701 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:46:47.395991   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396002   58701 out.go:304] Setting ErrFile to fd 2...
	I0410 22:46:47.396019   58701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:46:47.396208   58701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:46:47.396802   58701 out.go:298] Setting JSON to false
	I0410 22:46:47.397726   58701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5350,"bootTime":1712783858,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:46:47.397786   58701 start.go:139] virtualization: kvm guest
	I0410 22:46:47.400191   58701 out.go:177] * [default-k8s-diff-port-519831] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:46:47.401578   58701 notify.go:220] Checking for updates...
	I0410 22:46:47.402880   58701 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:46:47.404311   58701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:46:47.405790   58701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:46:47.407012   58701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:46:47.408130   58701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:46:47.409497   58701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:46:47.411183   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:46:47.411591   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.411632   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.426322   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0410 22:46:47.426759   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.427345   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.427366   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.427716   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.427926   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.428221   58701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:46:47.428646   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:46:47.428696   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:46:47.444105   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0410 22:46:47.444537   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:46:47.445035   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:46:47.445058   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:46:47.445398   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:46:47.445592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:46:47.480451   58701 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 22:46:47.481837   58701 start.go:297] selected driver: kvm2
	I0410 22:46:47.481852   58701 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.481985   58701 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:46:47.482657   58701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.482750   58701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 22:46:47.498330   58701 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 22:46:47.498668   58701 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:46:47.498735   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:46:47.498748   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:46:47.498784   58701 start.go:340] cluster config:
	{Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:46:47.498877   58701 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 22:46:47.500723   58701 out.go:177] * Starting "default-k8s-diff-port-519831" primary control-plane node in "default-k8s-diff-port-519831" cluster
	I0410 22:46:47.180678   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:47.501967   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:46:47.502009   58701 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 22:46:47.502030   58701 cache.go:56] Caching tarball of preloaded images
	I0410 22:46:47.502108   58701 preload.go:173] Found /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0410 22:46:47.502118   58701 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0410 22:46:47.502202   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:46:47.502366   58701 start.go:360] acquireMachinesLock for default-k8s-diff-port-519831: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:46:50.252732   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:56.332647   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:46:59.404660   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:05.484717   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:08.556632   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:14.636753   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:17.708788   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:23.788661   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:26.860683   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:32.940630   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:36.012689   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:42.092749   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:45.164706   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:51.244682   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:47:54.316652   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:00.396637   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:03.468672   57270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.17:22: connect: no route to host
	I0410 22:48:06.472768   57719 start.go:364] duration metric: took 4m5.937893783s to acquireMachinesLock for "old-k8s-version-862528"
	I0410 22:48:06.472833   57719 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:06.472852   57719 fix.go:54] fixHost starting: 
	I0410 22:48:06.473157   57719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:06.473186   57719 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:06.488728   57719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0410 22:48:06.489157   57719 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:06.489590   57719 main.go:141] libmachine: Using API Version  1
	I0410 22:48:06.489612   57719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:06.490011   57719 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:06.490171   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:06.490337   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetState
	I0410 22:48:06.491997   57719 fix.go:112] recreateIfNeeded on old-k8s-version-862528: state=Stopped err=<nil>
	I0410 22:48:06.492030   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	W0410 22:48:06.492234   57719 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:06.493891   57719 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862528" ...
	I0410 22:48:06.469869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:06.469904   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470235   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:48:06.470261   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:48:06.470529   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:48:06.472589   57270 machine.go:97] duration metric: took 4m35.561692081s to provisionDockerMachine
	I0410 22:48:06.472636   57270 fix.go:56] duration metric: took 4m35.586484815s for fixHost
	I0410 22:48:06.472646   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 4m35.586540892s
	W0410 22:48:06.472671   57270 start.go:713] error starting host: provision: host is not running
	W0410 22:48:06.472773   57270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0410 22:48:06.472785   57270 start.go:728] Will try again in 5 seconds ...
	I0410 22:48:06.495233   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .Start
	I0410 22:48:06.495416   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring networks are active...
	I0410 22:48:06.496254   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network default is active
	I0410 22:48:06.496589   57719 main.go:141] libmachine: (old-k8s-version-862528) Ensuring network mk-old-k8s-version-862528 is active
	I0410 22:48:06.497002   57719 main.go:141] libmachine: (old-k8s-version-862528) Getting domain xml...
	I0410 22:48:06.497751   57719 main.go:141] libmachine: (old-k8s-version-862528) Creating domain...
	I0410 22:48:07.722703   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting to get IP...
	I0410 22:48:07.723942   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:07.724373   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:07.724451   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:07.724338   59021 retry.go:31] will retry after 284.455366ms: waiting for machine to come up
	I0410 22:48:08.011077   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.011598   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.011628   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.011545   59021 retry.go:31] will retry after 337.946102ms: waiting for machine to come up
	I0410 22:48:08.351219   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.351725   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.351744   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.351691   59021 retry.go:31] will retry after 454.774669ms: waiting for machine to come up
	I0410 22:48:08.808516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:08.808953   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:08.808991   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:08.808893   59021 retry.go:31] will retry after 484.667282ms: waiting for machine to come up
	I0410 22:48:09.295665   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.296127   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.296148   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.296083   59021 retry.go:31] will retry after 515.00238ms: waiting for machine to come up
	I0410 22:48:09.812855   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:09.813337   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:09.813362   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:09.813289   59021 retry.go:31] will retry after 596.67118ms: waiting for machine to come up
	I0410 22:48:10.411103   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:10.411616   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:10.411640   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:10.411568   59021 retry.go:31] will retry after 1.035822512s: waiting for machine to come up
	I0410 22:48:11.473748   57270 start.go:360] acquireMachinesLock for no-preload-646133: {Name:mkcfcfa8b01b63726303cd37d33f01d452118635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0410 22:48:11.448894   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:11.449358   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:11.449388   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:11.449315   59021 retry.go:31] will retry after 1.258446774s: waiting for machine to come up
	I0410 22:48:12.709048   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:12.709587   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:12.709618   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:12.709530   59021 retry.go:31] will retry after 1.149380432s: waiting for machine to come up
	I0410 22:48:13.860550   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:13.861084   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:13.861110   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:13.861028   59021 retry.go:31] will retry after 1.733388735s: waiting for machine to come up
	I0410 22:48:15.595870   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:15.596447   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:15.596487   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:15.596343   59021 retry.go:31] will retry after 2.536794123s: waiting for machine to come up
	I0410 22:48:18.135592   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:18.136099   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:18.136128   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:18.136056   59021 retry.go:31] will retry after 3.390395523s: waiting for machine to come up
	I0410 22:48:21.528518   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:21.528964   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | unable to find current IP address of domain old-k8s-version-862528 in network mk-old-k8s-version-862528
	I0410 22:48:21.529008   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | I0410 22:48:21.528906   59021 retry.go:31] will retry after 4.165145769s: waiting for machine to come up
	I0410 22:48:26.977460   58186 start.go:364] duration metric: took 3m29.815175662s to acquireMachinesLock for "embed-certs-706500"
	I0410 22:48:26.977524   58186 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:26.977532   58186 fix.go:54] fixHost starting: 
	I0410 22:48:26.977935   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:26.977965   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:26.994175   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0410 22:48:26.994552   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:26.995016   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:48:26.995040   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:26.995447   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:26.995652   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:26.995826   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:48:26.997547   58186 fix.go:112] recreateIfNeeded on embed-certs-706500: state=Stopped err=<nil>
	I0410 22:48:26.997580   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	W0410 22:48:26.997902   58186 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:27.000500   58186 out.go:177] * Restarting existing kvm2 VM for "embed-certs-706500" ...
	I0410 22:48:27.002204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Start
	I0410 22:48:27.002398   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring networks are active...
	I0410 22:48:27.003133   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network default is active
	I0410 22:48:27.003465   58186 main.go:141] libmachine: (embed-certs-706500) Ensuring network mk-embed-certs-706500 is active
	I0410 22:48:27.003863   58186 main.go:141] libmachine: (embed-certs-706500) Getting domain xml...
	I0410 22:48:27.004603   58186 main.go:141] libmachine: (embed-certs-706500) Creating domain...
	I0410 22:48:25.699595   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700129   57719 main.go:141] libmachine: (old-k8s-version-862528) Found IP for machine: 192.168.61.178
	I0410 22:48:25.700159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has current primary IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.700166   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserving static IP address...
	I0410 22:48:25.700654   57719 main.go:141] libmachine: (old-k8s-version-862528) Reserved static IP address: 192.168.61.178
	I0410 22:48:25.700676   57719 main.go:141] libmachine: (old-k8s-version-862528) Waiting for SSH to be available...
	I0410 22:48:25.700704   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.700732   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | skip adding static IP to network mk-old-k8s-version-862528 - found existing host DHCP lease matching {name: "old-k8s-version-862528", mac: "52:54:00:d0:b7:c9", ip: "192.168.61.178"}
	I0410 22:48:25.700745   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Getting to WaitForSSH function...
	I0410 22:48:25.702929   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703290   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.703322   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.703490   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH client type: external
	I0410 22:48:25.703519   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa (-rw-------)
	I0410 22:48:25.703551   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:25.703590   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | About to run SSH command:
	I0410 22:48:25.703635   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | exit 0
	I0410 22:48:25.832738   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | SSH cmd err, output: <nil>: 
	I0410 22:48:25.833133   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetConfigRaw
	I0410 22:48:25.833784   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:25.836323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.836874   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.836908   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.837156   57719 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/config.json ...
	I0410 22:48:25.837472   57719 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:25.837502   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:25.837710   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.840159   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840488   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.840516   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.840593   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.840815   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.840992   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.841134   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.841337   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.841543   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.841556   57719 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:25.957153   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:25.957189   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957438   57719 buildroot.go:166] provisioning hostname "old-k8s-version-862528"
	I0410 22:48:25.957461   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:25.957679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:25.960779   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961149   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:25.961184   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:25.961332   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:25.961546   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961689   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:25.961864   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:25.962020   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:25.962196   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:25.962207   57719 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862528 && echo "old-k8s-version-862528" | sudo tee /etc/hostname
	I0410 22:48:26.087073   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862528
	
	I0410 22:48:26.087099   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.089770   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090109   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.090140   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.090261   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.090446   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090623   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.090760   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.090951   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.091131   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.091155   57719 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:26.214422   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:26.214462   57719 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:26.214490   57719 buildroot.go:174] setting up certificates
	I0410 22:48:26.214498   57719 provision.go:84] configureAuth start
	I0410 22:48:26.214509   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetMachineName
	I0410 22:48:26.214793   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.217463   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217809   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.217850   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.217975   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.219971   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220235   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.220265   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.220480   57719 provision.go:143] copyHostCerts
	I0410 22:48:26.220526   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:26.220542   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:26.220604   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:26.220703   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:26.220712   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:26.220736   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:26.220789   57719 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:26.220796   57719 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:26.220817   57719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:26.220864   57719 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862528 san=[127.0.0.1 192.168.61.178 localhost minikube old-k8s-version-862528]
	I0410 22:48:26.288372   57719 provision.go:177] copyRemoteCerts
	I0410 22:48:26.288445   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:26.288468   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.290980   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291298   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.291339   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.291444   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.291635   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.291809   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.291927   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.379823   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:26.405285   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0410 22:48:26.430122   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:26.456124   57719 provision.go:87] duration metric: took 241.614364ms to configureAuth
	I0410 22:48:26.456154   57719 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:26.456356   57719 config.go:182] Loaded profile config "old-k8s-version-862528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0410 22:48:26.456480   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.459028   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459335   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.459366   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.459558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.459742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.459888   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.460037   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.460211   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.460379   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.460413   57719 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:26.732588   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:26.732614   57719 machine.go:97] duration metric: took 895.122467ms to provisionDockerMachine
	I0410 22:48:26.732627   57719 start.go:293] postStartSetup for "old-k8s-version-862528" (driver="kvm2")
	I0410 22:48:26.732641   57719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:26.732679   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.733014   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:26.733044   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.735820   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736217   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.736244   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.736418   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.736630   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.736840   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.737020   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.823452   57719 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:26.827806   57719 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:26.827827   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:26.827899   57719 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:26.828009   57719 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:26.828122   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:26.837564   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:26.862278   57719 start.go:296] duration metric: took 129.638185ms for postStartSetup
	I0410 22:48:26.862325   57719 fix.go:56] duration metric: took 20.389482643s for fixHost
	I0410 22:48:26.862346   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.864911   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865277   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.865301   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.865419   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.865597   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865742   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.865872   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.866083   57719 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:26.866283   57719 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.178 22 <nil> <nil>}
	I0410 22:48:26.866300   57719 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:26.977317   57719 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789306.948982315
	
	I0410 22:48:26.977337   57719 fix.go:216] guest clock: 1712789306.948982315
	I0410 22:48:26.977344   57719 fix.go:229] Guest: 2024-04-10 22:48:26.948982315 +0000 UTC Remote: 2024-04-10 22:48:26.862329953 +0000 UTC m=+266.486936912 (delta=86.652362ms)
	I0410 22:48:26.977362   57719 fix.go:200] guest clock delta is within tolerance: 86.652362ms
	I0410 22:48:26.977366   57719 start.go:83] releasing machines lock for "old-k8s-version-862528", held for 20.504554043s
	I0410 22:48:26.977386   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.977653   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:26.980035   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980376   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.980419   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.980602   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981224   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981421   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .DriverName
	I0410 22:48:26.981516   57719 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:26.981558   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.981645   57719 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:26.981670   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHHostname
	I0410 22:48:26.984375   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984568   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984840   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.984868   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.984953   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985030   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:26.985079   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:26.985118   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985236   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHPort
	I0410 22:48:26.985277   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985374   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHKeyPath
	I0410 22:48:26.985450   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:26.985516   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetSSHUsername
	I0410 22:48:26.985635   57719 sshutil.go:53] new ssh client: &{IP:192.168.61.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/old-k8s-version-862528/id_rsa Username:docker}
	I0410 22:48:27.105002   57719 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:27.111205   57719 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:27.261678   57719 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:27.268336   57719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:27.268423   57719 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:27.290099   57719 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:27.290122   57719 start.go:494] detecting cgroup driver to use...
	I0410 22:48:27.290174   57719 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:27.308787   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:27.325557   57719 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:27.325611   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:27.340859   57719 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:27.355570   57719 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:27.479670   57719 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:27.653364   57719 docker.go:233] disabling docker service ...
	I0410 22:48:27.653424   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:27.669775   57719 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:27.683654   57719 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:27.813212   57719 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:27.929620   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:27.946085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:27.966341   57719 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0410 22:48:27.966404   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.978022   57719 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:27.978111   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:27.989324   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.001429   57719 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:28.012965   57719 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:28.024663   57719 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:28.034362   57719 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:28.034423   57719 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:28.048740   57719 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:48:28.060698   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:28.188526   57719 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:28.348442   57719 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:28.348523   57719 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:28.353501   57719 start.go:562] Will wait 60s for crictl version
	I0410 22:48:28.353566   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:28.357486   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:28.391138   57719 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:28.391221   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.421399   57719 ssh_runner.go:195] Run: crio --version
	I0410 22:48:28.455851   57719 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0410 22:48:28.457534   57719 main.go:141] libmachine: (old-k8s-version-862528) Calling .GetIP
	I0410 22:48:28.460913   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461297   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b7:c9", ip: ""} in network mk-old-k8s-version-862528: {Iface:virbr1 ExpiryTime:2024-04-10 23:48:17 +0000 UTC Type:0 Mac:52:54:00:d0:b7:c9 Iaid: IPaddr:192.168.61.178 Prefix:24 Hostname:old-k8s-version-862528 Clientid:01:52:54:00:d0:b7:c9}
	I0410 22:48:28.461323   57719 main.go:141] libmachine: (old-k8s-version-862528) DBG | domain old-k8s-version-862528 has defined IP address 192.168.61.178 and MAC address 52:54:00:d0:b7:c9 in network mk-old-k8s-version-862528
	I0410 22:48:28.461558   57719 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:28.466450   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:28.480549   57719 kubeadm.go:877] updating cluster {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:28.480671   57719 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 22:48:28.480775   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:28.536971   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:28.537034   57719 ssh_runner.go:195] Run: which lz4
	I0410 22:48:28.541757   57719 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:48:28.546381   57719 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:28.546413   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0410 22:48:30.411805   57719 crio.go:462] duration metric: took 1.870076139s to copy over tarball
	I0410 22:48:30.411900   57719 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:28.229217   58186 main.go:141] libmachine: (embed-certs-706500) Waiting to get IP...
	I0410 22:48:28.230257   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.230673   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.230724   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.230643   59155 retry.go:31] will retry after 262.296498ms: waiting for machine to come up
	I0410 22:48:28.494117   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.494631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.494660   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.494584   59155 retry.go:31] will retry after 237.287095ms: waiting for machine to come up
	I0410 22:48:28.733250   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:28.733795   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:28.733817   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:28.733755   59155 retry.go:31] will retry after 387.436239ms: waiting for machine to come up
	I0410 22:48:29.123585   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.124128   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.124163   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.124073   59155 retry.go:31] will retry after 428.418916ms: waiting for machine to come up
	I0410 22:48:29.554781   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:29.555244   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:29.555285   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:29.555235   59155 retry.go:31] will retry after 683.194159ms: waiting for machine to come up
	I0410 22:48:30.239955   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:30.240385   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:30.240463   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:30.240365   59155 retry.go:31] will retry after 764.240086ms: waiting for machine to come up
	I0410 22:48:31.006294   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:31.006789   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:31.006816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:31.006750   59155 retry.go:31] will retry after 1.113674235s: waiting for machine to come up
	I0410 22:48:33.358026   57719 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946092727s)
	I0410 22:48:33.358059   57719 crio.go:469] duration metric: took 2.946222933s to extract the tarball
	I0410 22:48:33.358069   57719 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:33.402924   57719 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:33.441006   57719 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0410 22:48:33.441033   57719 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:48:33.441090   57719 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.441142   57719 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.441203   57719 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.441210   57719 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.441318   57719 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0410 22:48:33.441339   57719 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.441375   57719 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.441395   57719 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442645   57719 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.442667   57719 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.442706   57719 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.442717   57719 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0410 22:48:33.442796   57719 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.442807   57719 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.442814   57719 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:33.442866   57719 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.651119   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.652634   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.665548   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.669396   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.672510   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.674137   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0410 22:48:33.686915   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.756592   57719 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0410 22:48:33.756639   57719 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.756696   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.756696   57719 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0410 22:48:33.756789   57719 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.756810   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867043   57719 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0410 22:48:33.867061   57719 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0410 22:48:33.867090   57719 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.867091   57719 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.867135   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867166   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867185   57719 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0410 22:48:33.867220   57719 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.867252   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867261   57719 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0410 22:48:33.867303   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0410 22:48:33.867311   57719 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0410 22:48:33.867355   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.867359   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0410 22:48:33.867286   57719 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0410 22:48:33.867452   57719 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.867481   57719 ssh_runner.go:195] Run: which crictl
	I0410 22:48:33.871719   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0410 22:48:33.881086   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0410 22:48:33.964827   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0410 22:48:33.964854   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0410 22:48:33.964932   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0410 22:48:33.964948   57719 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0410 22:48:33.976084   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0410 22:48:33.976155   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0410 22:48:33.976205   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0410 22:48:34.011460   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0410 22:48:34.038739   57719 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0410 22:48:34.289751   57719 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:48:34.429542   57719 cache_images.go:92] duration metric: took 988.487885ms to LoadCachedImages
	W0410 22:48:34.429636   57719 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0410 22:48:34.429665   57719 kubeadm.go:928] updating node { 192.168.61.178 8443 v1.20.0 crio true true} ...
	I0410 22:48:34.429782   57719 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
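The kubelet unit drop-in shown above is rendered from the node config and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal sketch of rendering such a drop-in with Go's text/template follows; the template fields and the reduced flag set are assumptions, not minikube's real template.

// sketch: rendering a kubelet systemd drop-in from a template (assumed fields)
package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants={{.ContainerRuntime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"ContainerRuntime":  "crio",
		"KubernetesVersion": "v1.20.0",
		"NodeName":          "old-k8s-version-862528",
		"NodeIP":            "192.168.61.178",
	})
}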
	I0410 22:48:34.429870   57719 ssh_runner.go:195] Run: crio config
	I0410 22:48:34.478794   57719 cni.go:84] Creating CNI manager for ""
	I0410 22:48:34.478829   57719 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:34.478845   57719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:34.478868   57719 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.178 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862528 NodeName:old-k8s-version-862528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0410 22:48:34.479065   57719 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862528"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
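The kubeadm.yaml generated above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of walking those documents with the gopkg.in/yaml.v3 module follows; the local file path is hypothetical.

// sketch: enumerate the documents in a multi-doc kubeadm.yaml (assumed path)
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}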
	I0410 22:48:34.479147   57719 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0410 22:48:34.489950   57719 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:34.490007   57719 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:34.500261   57719 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0410 22:48:34.517530   57719 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:34.534814   57719 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0410 22:48:34.552669   57719 ssh_runner.go:195] Run: grep 192.168.61.178	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:34.556612   57719 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:34.569643   57719 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:34.700791   57719 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:34.719682   57719 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528 for IP: 192.168.61.178
	I0410 22:48:34.719703   57719 certs.go:194] generating shared ca certs ...
	I0410 22:48:34.719722   57719 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:34.719900   57719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:34.719951   57719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:34.719965   57719 certs.go:256] generating profile certs ...
	I0410 22:48:34.720091   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.key
	I0410 22:48:34.720155   57719 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key.a46c310c
	I0410 22:48:34.720199   57719 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key
	I0410 22:48:34.720337   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:34.720376   57719 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:34.720386   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:34.720438   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:34.720472   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:34.720502   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:34.720557   57719 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:34.721238   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:34.769810   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:34.805397   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:34.846743   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:34.888720   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0410 22:48:34.915958   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:48:34.962182   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:34.992444   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:35.023525   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:35.051098   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:35.077305   57719 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:35.102172   57719 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:35.121381   57719 ssh_runner.go:195] Run: openssl version
	I0410 22:48:35.127869   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:35.140056   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145172   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.145242   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:35.152081   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:48:35.164621   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:35.176511   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182164   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.182217   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:35.188968   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:35.201491   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:35.213468   57719 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218519   57719 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.218586   57719 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:35.224872   57719 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:35.236964   57719 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:35.242262   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:35.249245   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:35.256301   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:35.263359   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:35.270166   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:35.276953   57719 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
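Each "openssl x509 -checkend 86400" run above asks whether the certificate expires within the next 24 hours (non-zero exit if it does). A minimal Go equivalent using crypto/x509 is sketched below; the certificate path is an assumption for illustration.

// sketch: 24-hour expiry check equivalent to openssl x509 -checkend 86400
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}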
	I0410 22:48:35.283529   57719 kubeadm.go:391] StartCluster: {Name:old-k8s-version-862528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.178 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:35.283643   57719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:35.283700   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.328461   57719 cri.go:89] found id: ""
	I0410 22:48:35.328532   57719 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:35.340207   57719 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:35.340235   57719 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:35.340245   57719 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:35.340293   57719 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:35.351212   57719 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:35.352189   57719 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862528" does not appear in /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:48:35.352989   57719 kubeconfig.go:62] /home/jenkins/minikube-integration/18610-5679/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862528" cluster setting kubeconfig missing "old-k8s-version-862528" context setting]
	I0410 22:48:35.353956   57719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
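The kubeconfig repair step above re-adds the missing "old-k8s-version-862528" cluster and context entries. A minimal sketch of the same idea with client-go's clientcmd package follows; the server URL, CA path, and entry names are assumptions, and minikube uses its own kubeconfig helpers rather than this direct call.

// sketch: add a missing cluster and context to a kubeconfig (assumed values)
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/18610-5679/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	cfg.Clusters["old-k8s-version-862528"] = &api.Cluster{
		Server:               "https://192.168.61.178:8443",
		CertificateAuthority: "/home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt",
	}
	cfg.Contexts["old-k8s-version-862528"] = &api.Context{
		Cluster:  "old-k8s-version-862528",
		AuthInfo: "old-k8s-version-862528", // assumes matching credentials already exist
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}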
	I0410 22:48:32.122313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:32.122773   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:32.122816   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:32.122717   59155 retry.go:31] will retry after 1.052378413s: waiting for machine to come up
	I0410 22:48:33.176207   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:33.176621   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:33.176665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:33.176568   59155 retry.go:31] will retry after 1.548572633s: waiting for machine to come up
	I0410 22:48:34.726554   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:34.726992   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:34.727020   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:34.726938   59155 retry.go:31] will retry after 1.800911659s: waiting for machine to come up
	I0410 22:48:36.529629   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:36.530133   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:36.530164   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:36.530085   59155 retry.go:31] will retry after 2.434743044s: waiting for machine to come up
	I0410 22:48:35.428830   57719 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:35.479813   57719 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.178
	I0410 22:48:35.479853   57719 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:35.479882   57719 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:35.479940   57719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:35.520506   57719 cri.go:89] found id: ""
	I0410 22:48:35.520577   57719 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:35.538167   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:35.548571   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:35.548600   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:35.548662   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:35.558559   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:35.558612   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:35.568950   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:35.578644   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:35.578712   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:35.589075   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.600265   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:35.600321   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:35.611459   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:35.621712   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:35.621785   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:35.632133   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:35.643494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:35.775309   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.133286   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.35793645s)
	I0410 22:48:37.133334   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.368687   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.497136   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:37.584652   57719 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:37.584744   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.085293   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.585489   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:39.584951   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:40.085144   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:38.966866   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:38.967360   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:38.967383   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:38.967339   59155 retry.go:31] will retry after 3.219302627s: waiting for machine to come up
	I0410 22:48:40.585356   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.084839   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:41.585434   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.085797   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:42.585578   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.085621   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:43.585581   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:44.584785   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:45.085394   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
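The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above poll roughly every 500ms until the apiserver process appears. A minimal Go sketch of that poll is below; the overall deadline is an assumption, and the real command runs over SSH inside the VM rather than locally.

// sketch: poll for the kube-apiserver process with pgrep (local, assumed deadline)
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(90 * time.Second); err != nil {
		fmt.Println(err)
	}
}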
	I0410 22:48:46.409467   58701 start.go:364] duration metric: took 1m58.907071516s to acquireMachinesLock for "default-k8s-diff-port-519831"
	I0410 22:48:46.409536   58701 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:48:46.409557   58701 fix.go:54] fixHost starting: 
	I0410 22:48:46.410030   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:48:46.410080   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:48:46.427877   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I0410 22:48:46.428357   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:48:46.428836   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:48:46.428858   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:48:46.429163   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:48:46.429354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:48:46.429494   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:48:46.431151   58701 fix.go:112] recreateIfNeeded on default-k8s-diff-port-519831: state=Stopped err=<nil>
	I0410 22:48:46.431192   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	W0410 22:48:46.431372   58701 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:48:46.433597   58701 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-519831" ...
	I0410 22:48:42.187835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:42.188266   58186 main.go:141] libmachine: (embed-certs-706500) DBG | unable to find current IP address of domain embed-certs-706500 in network mk-embed-certs-706500
	I0410 22:48:42.188305   58186 main.go:141] libmachine: (embed-certs-706500) DBG | I0410 22:48:42.188191   59155 retry.go:31] will retry after 2.924293511s: waiting for machine to come up
	I0410 22:48:45.113669   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114211   58186 main.go:141] libmachine: (embed-certs-706500) Found IP for machine: 192.168.39.10
	I0410 22:48:45.114229   58186 main.go:141] libmachine: (embed-certs-706500) Reserving static IP address...
	I0410 22:48:45.114243   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has current primary IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.114685   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.114711   58186 main.go:141] libmachine: (embed-certs-706500) DBG | skip adding static IP to network mk-embed-certs-706500 - found existing host DHCP lease matching {name: "embed-certs-706500", mac: "52:54:00:36:c4:8c", ip: "192.168.39.10"}
	I0410 22:48:45.114721   58186 main.go:141] libmachine: (embed-certs-706500) Reserved static IP address: 192.168.39.10
	I0410 22:48:45.114728   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Getting to WaitForSSH function...
	I0410 22:48:45.114743   58186 main.go:141] libmachine: (embed-certs-706500) Waiting for SSH to be available...
	I0410 22:48:45.116708   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.116963   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.117007   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.117139   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH client type: external
	I0410 22:48:45.117167   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa (-rw-------)
	I0410 22:48:45.117198   58186 main.go:141] libmachine: (embed-certs-706500) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:48:45.117224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | About to run SSH command:
	I0410 22:48:45.117236   58186 main.go:141] libmachine: (embed-certs-706500) DBG | exit 0
	I0410 22:48:45.240518   58186 main.go:141] libmachine: (embed-certs-706500) DBG | SSH cmd err, output: <nil>: 
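"Waiting for SSH to be available..." above is satisfied by running "exit 0" through an external ssh client until it succeeds. The sketch below simplifies that to a TCP connect against port 22; the address comes from the log and the timeout is an assumed deadline.

// sketch: SSH readiness wait as a plain TCP probe (simplification of the real check)
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not reachable within %v", addr, timeout)
}

func main() {
	fmt.Println(waitForPort("192.168.39.10:22", 60*time.Second))
}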
	I0410 22:48:45.240843   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetConfigRaw
	I0410 22:48:45.241532   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.243908   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244293   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.244317   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.244576   58186 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/config.json ...
	I0410 22:48:45.244775   58186 machine.go:94] provisionDockerMachine start ...
	I0410 22:48:45.244799   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:45.245004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.247248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247639   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.247665   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.247859   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.248039   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248217   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.248375   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.248543   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.248746   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.248766   58186 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:48:45.357146   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:48:45.357177   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357428   58186 buildroot.go:166] provisioning hostname "embed-certs-706500"
	I0410 22:48:45.357447   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.357624   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.360299   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360700   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.360796   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.360838   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.361049   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361183   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.361367   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.361537   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.361702   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.361716   58186 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-706500 && echo "embed-certs-706500" | sudo tee /etc/hostname
	I0410 22:48:45.487121   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-706500
	
	I0410 22:48:45.487160   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.490242   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490597   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.490625   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.490805   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.491004   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491204   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.491359   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.491576   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.491792   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.491824   58186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-706500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-706500/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-706500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:48:45.606186   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:48:45.606212   58186 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:48:45.606246   58186 buildroot.go:174] setting up certificates
	I0410 22:48:45.606257   58186 provision.go:84] configureAuth start
	I0410 22:48:45.606269   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetMachineName
	I0410 22:48:45.606594   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:45.609459   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.609893   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.609932   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.610134   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.612631   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.612945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.612979   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.613144   58186 provision.go:143] copyHostCerts
	I0410 22:48:45.613193   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:48:45.613207   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:48:45.613262   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:48:45.613378   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:48:45.613393   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:48:45.613427   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:48:45.613495   58186 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:48:45.613505   58186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:48:45.613529   58186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:48:45.613592   58186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.embed-certs-706500 san=[127.0.0.1 192.168.39.10 embed-certs-706500 localhost minikube]
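The provision step above generates a server certificate signed by the minikube CA with the SANs listed in the log. A minimal crypto/x509 sketch of issuing such a SAN-bearing certificate follows; key sizes, validity periods, and the in-memory CA are assumptions so the example stays self-contained.

// sketch: issue a server cert with DNS/IP SANs, signed by a CA (assumed parameters)
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// The real flow loads ca.pem/ca-key.pem from disk; generate a throwaway CA
	// here only so the sketch runs on its own.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	serverTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "embed-certs-706500", Organization: []string{"jenkins.embed-certs-706500"}},
		DNSNames:     []string{"embed-certs-706500", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.10")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, serverTmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}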
	I0410 22:48:45.737049   58186 provision.go:177] copyRemoteCerts
	I0410 22:48:45.737105   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:48:45.737129   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.739712   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740060   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.740089   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.740347   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.740589   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.740763   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.740957   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:45.828677   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:48:45.854080   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:48:45.878704   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0410 22:48:45.902611   58186 provision.go:87] duration metric: took 296.343353ms to configureAuth
	I0410 22:48:45.902640   58186 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:48:45.902879   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:48:45.902962   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:45.905588   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.905950   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:45.905972   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:45.906165   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:45.906360   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906473   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:45.906561   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:45.906725   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:45.906887   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:45.906911   58186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:48:46.172772   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:48:46.172807   58186 machine.go:97] duration metric: took 928.014662ms to provisionDockerMachine
	I0410 22:48:46.172823   58186 start.go:293] postStartSetup for "embed-certs-706500" (driver="kvm2")
	I0410 22:48:46.172836   58186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:48:46.172877   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.173197   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:48:46.173223   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.176113   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.176495   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.176679   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.176896   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.177118   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.177328   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.260470   58186 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:48:46.265003   58186 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:48:46.265030   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:48:46.265088   58186 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:48:46.265158   58186 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:48:46.265241   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:48:46.274931   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:46.300036   58186 start.go:296] duration metric: took 127.199834ms for postStartSetup
	I0410 22:48:46.300082   58186 fix.go:56] duration metric: took 19.322550114s for fixHost
	I0410 22:48:46.300108   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.302945   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303252   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.303279   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.303479   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.303700   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303861   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.303990   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.304140   58186 main.go:141] libmachine: Using SSH client type: native
	I0410 22:48:46.304308   58186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0410 22:48:46.304318   58186 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:48:46.409294   58186 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789326.385898055
	
	I0410 22:48:46.409317   58186 fix.go:216] guest clock: 1712789326.385898055
	I0410 22:48:46.409327   58186 fix.go:229] Guest: 2024-04-10 22:48:46.385898055 +0000 UTC Remote: 2024-04-10 22:48:46.300087658 +0000 UTC m=+229.287947250 (delta=85.810397ms)
	I0410 22:48:46.409352   58186 fix.go:200] guest clock delta is within tolerance: 85.810397ms
	I0410 22:48:46.409360   58186 start.go:83] releasing machines lock for "embed-certs-706500", held for 19.431860062s
	I0410 22:48:46.409389   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.409752   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:46.412201   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412616   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.412651   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.412790   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413361   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413559   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:48:46.413617   58186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:48:46.413665   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.413796   58186 ssh_runner.go:195] Run: cat /version.json
	I0410 22:48:46.413831   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:48:46.416879   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417224   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417248   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417268   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417428   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.417630   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.417811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.417835   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:46.417858   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:46.417938   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.418030   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:48:46.418154   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:48:46.418284   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:48:46.418463   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:48:46.529204   58186 ssh_runner.go:195] Run: systemctl --version
	I0410 22:48:46.535396   58186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:48:46.681100   58186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:48:46.687278   58186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:48:46.687340   58186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:48:46.703105   58186 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:48:46.703128   58186 start.go:494] detecting cgroup driver to use...
	I0410 22:48:46.703191   58186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:48:46.719207   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:48:46.733444   58186 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:48:46.733509   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:48:46.747369   58186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:48:46.762231   58186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:48:46.874897   58186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:48:47.023672   58186 docker.go:233] disabling docker service ...
	I0410 22:48:47.023749   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:48:47.038963   58186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:48:47.053827   58186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:48:46.435268   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Start
	I0410 22:48:46.435498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring networks are active...
	I0410 22:48:46.436266   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network default is active
	I0410 22:48:46.436691   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Ensuring network mk-default-k8s-diff-port-519831 is active
	I0410 22:48:46.437163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Getting domain xml...
	I0410 22:48:46.437799   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Creating domain...
	I0410 22:48:47.206641   58186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:48:47.363331   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:48:47.380657   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:48:47.402234   58186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:48:47.402306   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.419356   58186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:48:47.419417   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.435320   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.450812   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.462588   58186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:48:47.474323   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.494156   58186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.515195   58186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:48:47.526148   58186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:48:47.536045   58186 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:48:47.536106   58186 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:48:47.549556   58186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:48:47.567236   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:47.702628   58186 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:48:47.848908   58186 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:48:47.849000   58186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:48:47.854126   58186 start.go:562] Will wait 60s for crictl version
	I0410 22:48:47.854191   58186 ssh_runner.go:195] Run: which crictl
	I0410 22:48:47.858095   58186 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:48:47.897714   58186 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:48:47.897805   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.927597   58186 ssh_runner.go:195] Run: crio --version
	I0410 22:48:47.958357   58186 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:48:45.584769   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.085396   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:46.585857   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.085186   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.585668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.085585   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:48.585617   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.085227   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:49.585626   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:50.084900   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:47.959811   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetIP
	I0410 22:48:47.962805   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963246   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:48:47.963276   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:48:47.963510   58186 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0410 22:48:47.967753   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:47.981154   58186 kubeadm.go:877] updating cluster {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:48:47.981258   58186 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:48:47.981298   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:48.018208   58186 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:48:48.018274   58186 ssh_runner.go:195] Run: which lz4
	I0410 22:48:48.023613   58186 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0410 22:48:48.029036   58186 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:48:48.029063   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:48:49.637729   58186 crio.go:462] duration metric: took 1.61414003s to copy over tarball
	I0410 22:48:49.637796   58186 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:48:52.046454   58186 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.408634496s)
	I0410 22:48:52.046482   58186 crio.go:469] duration metric: took 2.408728343s to extract the tarball
	I0410 22:48:52.046489   58186 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:48:47.701355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting to get IP...
	I0410 22:48:47.702406   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.702994   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.703067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.702962   59362 retry.go:31] will retry after 292.834608ms: waiting for machine to come up
	I0410 22:48:47.997294   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997757   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:47.997785   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:47.997701   59362 retry.go:31] will retry after 341.35168ms: waiting for machine to come up
	I0410 22:48:48.340842   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341347   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.341379   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.341279   59362 retry.go:31] will retry after 438.041848ms: waiting for machine to come up
	I0410 22:48:48.780565   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781092   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:48.781116   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:48.781038   59362 retry.go:31] will retry after 557.770882ms: waiting for machine to come up
	I0410 22:48:49.340858   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.341354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.341282   59362 retry.go:31] will retry after 637.316206ms: waiting for machine to come up
	I0410 22:48:49.980256   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980737   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:49.980761   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:49.980696   59362 retry.go:31] will retry after 909.873955ms: waiting for machine to come up
	I0410 22:48:50.891776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892197   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:50.892229   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:50.892147   59362 retry.go:31] will retry after 745.06949ms: waiting for machine to come up
	I0410 22:48:51.638436   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638907   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:51.638933   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:51.638854   59362 retry.go:31] will retry after 1.060037191s: waiting for machine to come up
	I0410 22:48:50.585691   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.085669   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:51.585308   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.085393   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.585619   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.085643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:53.585076   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.085251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:54.585027   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.085629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:52.087135   58186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:48:52.139368   58186 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:48:52.139389   58186 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:48:52.139397   58186 kubeadm.go:928] updating node { 192.168.39.10 8443 v1.29.3 crio true true} ...
	I0410 22:48:52.139535   58186 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-706500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:48:52.139629   58186 ssh_runner.go:195] Run: crio config
	I0410 22:48:52.193347   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:48:52.193375   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:48:52.193390   58186 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:48:52.193429   58186 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-706500 NodeName:embed-certs-706500 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:48:52.193606   58186 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-706500"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:48:52.193686   58186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:48:52.206450   58186 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:48:52.206507   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:48:52.218898   58186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0410 22:48:52.239285   58186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:48:52.257083   58186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0410 22:48:52.275448   58186 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0410 22:48:52.279486   58186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:48:52.293308   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:48:52.428424   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:48:52.446713   58186 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500 for IP: 192.168.39.10
	I0410 22:48:52.446738   58186 certs.go:194] generating shared ca certs ...
	I0410 22:48:52.446759   58186 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:48:52.446937   58186 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:48:52.446980   58186 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:48:52.446990   58186 certs.go:256] generating profile certs ...
	I0410 22:48:52.447059   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/client.key
	I0410 22:48:52.447124   58186 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key.f3045f1a
	I0410 22:48:52.447156   58186 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key
	I0410 22:48:52.447294   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:48:52.447328   58186 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:48:52.447335   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:48:52.447354   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:48:52.447374   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:48:52.447405   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:48:52.447457   58186 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:48:52.448166   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:48:52.481862   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:48:52.530983   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:48:52.572191   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:48:52.614466   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0410 22:48:52.644331   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:48:52.672811   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:48:52.698376   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/embed-certs-706500/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:48:52.723998   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:48:52.749405   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:48:52.777529   58186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:48:52.803663   58186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:48:52.822234   58186 ssh_runner.go:195] Run: openssl version
	I0410 22:48:52.830835   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:48:52.843425   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848384   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.848444   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:48:52.854869   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:48:52.867228   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:48:52.879319   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884241   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.884324   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:48:52.890349   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:48:52.902398   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:48:52.913996   58186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918757   58186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.918824   58186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:48:52.924669   58186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:48:52.936581   58186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:48:52.941242   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:48:52.947526   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:48:52.953939   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:48:52.960447   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:48:52.966829   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:48:52.973148   58186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:48:52.979557   58186 kubeadm.go:391] StartCluster: {Name:embed-certs-706500 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-706500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:48:52.979669   58186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:48:52.979744   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.018394   58186 cri.go:89] found id: ""
	I0410 22:48:53.018479   58186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:48:53.030088   58186 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:48:53.030112   58186 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:48:53.030118   58186 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:48:53.030184   58186 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:48:53.041035   58186 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:48:53.042312   58186 kubeconfig.go:125] found "embed-certs-706500" server: "https://192.168.39.10:8443"
	I0410 22:48:53.044306   58186 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:48:53.054911   58186 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.10
	I0410 22:48:53.054948   58186 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:48:53.054974   58186 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:48:53.055020   58186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:48:53.093035   58186 cri.go:89] found id: ""
	I0410 22:48:53.093109   58186 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:48:53.111257   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:48:53.122098   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:48:53.122125   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:48:53.122176   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:48:53.133513   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:48:53.133587   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:48:53.144275   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:48:53.154921   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:48:53.155000   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:48:53.165604   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.175520   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:48:53.175582   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:48:53.186094   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:48:53.196086   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:48:53.196156   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:48:53.206564   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:48:53.217180   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:53.336883   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.151708   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.367165   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.457694   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:48:54.572579   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:48:54.572693   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.073196   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.572865   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:55.595374   58186 api_server.go:72] duration metric: took 1.022777759s to wait for apiserver process to appear ...
	I0410 22:48:55.595403   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:48:55.595424   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:52.701137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701574   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:52.701606   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:52.701529   59362 retry.go:31] will retry after 1.792719263s: waiting for machine to come up
	I0410 22:48:54.496380   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:54.496823   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:54.496740   59362 retry.go:31] will retry after 2.321115222s: waiting for machine to come up
	I0410 22:48:56.819654   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:56.820140   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:56.820072   59362 retry.go:31] will retry after 2.57309135s: waiting for machine to come up
	I0410 22:48:55.585506   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:56.585876   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.085775   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:57.585260   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.585588   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.085661   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:59.585663   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:00.085635   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:48:58.843447   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:48:58.843487   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:48:58.843504   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:58.962381   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:58.962431   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.095611   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.100754   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.100781   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:48:59.595968   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:48:59.606936   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:48:59.606977   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.096182   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.106346   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:00.106388   58186 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:00.595923   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:49:00.600197   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:49:00.609220   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:00.609246   58186 api_server.go:131] duration metric: took 5.013835577s to wait for apiserver health ...
	I0410 22:49:00.609256   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:49:00.609263   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:00.611220   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:00.612765   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:00.625567   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:00.648581   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:00.657652   58186 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:00.657688   58186 system_pods.go:61] "coredns-76f75df574-j4kj8" [1986e6b6-e6c7-4212-bdd5-10360a0b897c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:00.657696   58186 system_pods.go:61] "etcd-embed-certs-706500" [acbf9245-d4f8-4fa6-88a7-4f891f9f8403] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:00.657704   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [b9c79d1d-f571-4ed8-a68f-512e8a2a1705] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:00.657709   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [d229b85d-9a8d-4cd0-ac48-a6aea3769581] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:00.657715   58186 system_pods.go:61] "kube-proxy-8kzff" [ce35a33f-1697-44a7-ad64-83895236bc6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:00.657720   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [72c68a6c-beba-48a5-937b-51c40aab0386] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:00.657726   58186 system_pods.go:61] "metrics-server-57f55c9bc5-4r9pl" [40a91fc1-9e0a-4bcc-a2e9-65e9f2d2b960] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:00.657733   58186 system_pods.go:61] "storage-provisioner" [10f7637e-e6e0-4f04-b1eb-ac3bd205064f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:00.657742   58186 system_pods.go:74] duration metric: took 9.141859ms to wait for pod list to return data ...
	I0410 22:49:00.657752   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:00.662255   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:00.662300   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:00.662315   58186 node_conditions.go:105] duration metric: took 4.553643ms to run NodePressure ...
	I0410 22:49:00.662338   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:00.957923   58186 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962553   58186 kubeadm.go:733] kubelet initialised
	I0410 22:49:00.962575   58186 kubeadm.go:734] duration metric: took 4.616848ms waiting for restarted kubelet to initialise ...
	I0410 22:49:00.962585   58186 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:00.968387   58186 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
	I0410 22:48:59.395416   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395864   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:48:59.395893   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:48:59.395819   59362 retry.go:31] will retry after 2.378137008s: waiting for machine to come up
	I0410 22:49:01.776037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776587   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | unable to find current IP address of domain default-k8s-diff-port-519831 in network mk-default-k8s-diff-port-519831
	I0410 22:49:01.776641   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | I0410 22:49:01.776526   59362 retry.go:31] will retry after 4.360839049s: waiting for machine to come up
	I0410 22:49:00.585234   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.084884   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:01.585066   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.085697   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.585573   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.085552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:03.585521   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.084919   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:04.584802   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:05.085266   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:02.975009   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:04.976854   58186 pod_ready.go:102] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:06.141509   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142008   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Found IP for machine: 192.168.72.170
	I0410 22:49:06.142037   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has current primary IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.142047   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserving static IP address...
	I0410 22:49:06.142422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Reserved static IP address: 192.168.72.170
	I0410 22:49:06.142451   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Waiting for SSH to be available...
	I0410 22:49:06.142476   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.142499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | skip adding static IP to network mk-default-k8s-diff-port-519831 - found existing host DHCP lease matching {name: "default-k8s-diff-port-519831", mac: "52:54:00:dc:67:d5", ip: "192.168.72.170"}
	I0410 22:49:06.142518   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Getting to WaitForSSH function...
	I0410 22:49:06.144878   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145206   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.145238   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.145326   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH client type: external
	I0410 22:49:06.145365   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa (-rw-------)
	I0410 22:49:06.145401   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:06.145421   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | About to run SSH command:
	I0410 22:49:06.145438   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | exit 0
	I0410 22:49:06.272546   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:06.272919   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetConfigRaw
	I0410 22:49:06.273605   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.276234   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276610   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.276644   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.276851   58701 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/config.json ...
	I0410 22:49:06.277100   58701 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:06.277127   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:06.277400   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.279729   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280107   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.280146   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.280295   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.280480   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280658   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.280794   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.280939   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.281121   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.281138   58701 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:06.385219   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:06.385254   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385498   58701 buildroot.go:166] provisioning hostname "default-k8s-diff-port-519831"
	I0410 22:49:06.385527   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.385716   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.388422   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.388922   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.388963   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.389072   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.389292   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389462   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.389600   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.389751   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.389924   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.389938   58701 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-519831 && echo "default-k8s-diff-port-519831" | sudo tee /etc/hostname
	I0410 22:49:06.507221   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-519831
	
	I0410 22:49:06.507252   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.509837   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510179   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.510225   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.510385   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.510561   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.510880   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.511040   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:06.511236   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:06.511262   58701 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-519831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-519831/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-519831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:06.626097   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:06.626129   58701 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:06.626153   58701 buildroot.go:174] setting up certificates
	I0410 22:49:06.626163   58701 provision.go:84] configureAuth start
	I0410 22:49:06.626173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetMachineName
	I0410 22:49:06.626499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:06.629067   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629412   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.629450   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.629559   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.632132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632517   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.632548   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.632674   58701 provision.go:143] copyHostCerts
	I0410 22:49:06.632734   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:06.632755   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:06.632822   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:06.633021   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:06.633037   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:06.633078   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:06.633179   58701 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:06.633191   58701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:06.633223   58701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:06.633295   58701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-519831 san=[127.0.0.1 192.168.72.170 default-k8s-diff-port-519831 localhost minikube]
	I0410 22:49:06.835016   58701 provision.go:177] copyRemoteCerts
	I0410 22:49:06.835077   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:06.835104   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:06.837769   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838124   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:06.838152   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:06.838327   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:06.838519   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:06.838669   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:06.838808   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:06.921929   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:06.947855   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0410 22:49:06.972865   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:06.999630   58701 provision.go:87] duration metric: took 373.45654ms to configureAuth
	I0410 22:49:06.999658   58701 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:06.999872   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:49:06.999942   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.003015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003418   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.003452   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.003623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.003793   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.003946   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.004062   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.004208   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.004425   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.004448   58701 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:07.273568   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:07.273601   58701 machine.go:97] duration metric: took 996.483382ms to provisionDockerMachine
	I0410 22:49:07.273618   58701 start.go:293] postStartSetup for "default-k8s-diff-port-519831" (driver="kvm2")
	I0410 22:49:07.273634   58701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:07.273660   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.274009   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:07.274040   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.276736   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277132   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.277155   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.277354   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.277537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.277740   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.277891   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.361056   58701 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:07.365729   58701 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:07.365759   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:07.365834   58701 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:07.365935   58701 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:07.366064   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:07.376754   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:07.509384   57270 start.go:364] duration metric: took 56.035567079s to acquireMachinesLock for "no-preload-646133"
	I0410 22:49:07.509424   57270 start.go:96] Skipping create...Using existing machine configuration
	I0410 22:49:07.509432   57270 fix.go:54] fixHost starting: 
	I0410 22:49:07.509837   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:07.509872   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:07.526882   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0410 22:49:07.527337   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:07.527780   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:49:07.527801   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:07.528077   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:07.528238   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:07.528366   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:49:07.529732   57270 fix.go:112] recreateIfNeeded on no-preload-646133: state=Stopped err=<nil>
	I0410 22:49:07.529755   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	W0410 22:49:07.529878   57270 fix.go:138] unexpected machine state, will restart: <nil>
	I0410 22:49:07.531875   57270 out.go:177] * Restarting existing kvm2 VM for "no-preload-646133" ...
	I0410 22:49:07.402691   58701 start.go:296] duration metric: took 129.059293ms for postStartSetup
	I0410 22:49:07.402731   58701 fix.go:56] duration metric: took 20.99318672s for fixHost
	I0410 22:49:07.402751   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.405634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.405955   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.405996   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.406161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.406378   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406537   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.406647   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.406826   58701 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:07.407062   58701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0410 22:49:07.407079   58701 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:49:07.509210   58701 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789347.471050157
	
	I0410 22:49:07.509233   58701 fix.go:216] guest clock: 1712789347.471050157
	I0410 22:49:07.509241   58701 fix.go:229] Guest: 2024-04-10 22:49:07.471050157 +0000 UTC Remote: 2024-04-10 22:49:07.402735415 +0000 UTC m=+140.054227768 (delta=68.314742ms)
	I0410 22:49:07.509287   58701 fix.go:200] guest clock delta is within tolerance: 68.314742ms
	I0410 22:49:07.509297   58701 start.go:83] releasing machines lock for "default-k8s-diff-port-519831", held for 21.099785205s
	I0410 22:49:07.509328   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.509613   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:07.512255   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512634   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.512667   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.512827   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513364   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513531   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:07.513610   58701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:07.513649   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.513750   58701 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:07.513771   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:07.516338   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516685   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516776   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.516802   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.516951   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517142   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:07.517161   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517173   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:07.517310   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:07.517355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517470   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:07.517602   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:07.517604   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.517765   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:07.594218   58701 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:07.633783   58701 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:07.790430   58701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:07.797279   58701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:07.797358   58701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:07.815457   58701 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:07.815488   58701 start.go:494] detecting cgroup driver to use...
	I0410 22:49:07.815561   58701 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:07.833038   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:07.848577   58701 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:07.848648   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:07.863609   58701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:07.878299   58701 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:07.999388   58701 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:08.155534   58701 docker.go:233] disabling docker service ...
	I0410 22:49:08.155613   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:08.175545   58701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:08.195923   58701 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:08.340282   58701 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:08.485647   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:08.500245   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:08.520493   58701 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:08.520582   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.535455   58701 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:08.535521   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.547058   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.559638   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.571374   58701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:08.583796   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.598091   58701 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.622634   58701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:08.633858   58701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:08.645114   58701 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:08.645167   58701 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:08.660204   58701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:08.671345   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:08.804523   58701 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:08.953644   58701 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:08.953717   58701 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:08.958661   58701 start.go:562] Will wait 60s for crictl version
	I0410 22:49:08.958715   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:49:08.962938   58701 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:09.006335   58701 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:49:09.006425   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.037315   58701 ssh_runner.go:195] Run: crio --version
	I0410 22:49:09.069366   58701 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0410 22:49:07.533174   57270 main.go:141] libmachine: (no-preload-646133) Calling .Start
	I0410 22:49:07.533352   57270 main.go:141] libmachine: (no-preload-646133) Ensuring networks are active...
	I0410 22:49:07.534117   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network default is active
	I0410 22:49:07.534413   57270 main.go:141] libmachine: (no-preload-646133) Ensuring network mk-no-preload-646133 is active
	I0410 22:49:07.534851   57270 main.go:141] libmachine: (no-preload-646133) Getting domain xml...
	I0410 22:49:07.535553   57270 main.go:141] libmachine: (no-preload-646133) Creating domain...
	I0410 22:49:08.844990   57270 main.go:141] libmachine: (no-preload-646133) Waiting to get IP...
	I0410 22:49:08.845908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:08.846363   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:08.846459   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:08.846332   59513 retry.go:31] will retry after 241.150391ms: waiting for machine to come up
	I0410 22:49:09.088961   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.089455   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.089489   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.089417   59513 retry.go:31] will retry after 349.96397ms: waiting for machine to come up
	I0410 22:49:09.441226   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.441799   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.441828   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.441754   59513 retry.go:31] will retry after 444.576999ms: waiting for machine to come up
	I0410 22:49:05.585408   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.085250   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:06.585503   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.085422   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.584909   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.084863   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:08.585859   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.085175   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:09.585660   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:10.085221   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:07.475385   58186 pod_ready.go:92] pod "coredns-76f75df574-j4kj8" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:07.475414   58186 pod_ready.go:81] duration metric: took 6.506993581s for pod "coredns-76f75df574-j4kj8" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:07.475424   58186 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:09.486133   58186 pod_ready.go:102] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:11.483972   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.483994   58186 pod_ready.go:81] duration metric: took 4.008564427s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.484005   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490340   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.490380   58186 pod_ready.go:81] duration metric: took 6.362017ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.490399   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497078   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.497110   58186 pod_ready.go:81] duration metric: took 6.701645ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.497124   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504091   58186 pod_ready.go:92] pod "kube-proxy-8kzff" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.504118   58186 pod_ready.go:81] duration metric: took 6.985136ms for pod "kube-proxy-8kzff" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.504132   58186 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510619   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:11.510656   58186 pod_ready.go:81] duration metric: took 6.513031ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:11.510674   58186 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:09.070592   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetIP
	I0410 22:49:09.073850   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074163   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:09.074190   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:09.074388   58701 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:09.079170   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:09.093764   58701 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:09.093973   58701 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 22:49:09.094040   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:09.140874   58701 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0410 22:49:09.140951   58701 ssh_runner.go:195] Run: which lz4
	I0410 22:49:09.146775   58701 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0410 22:49:09.152876   58701 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0410 22:49:09.152917   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0410 22:49:10.827934   58701 crio.go:462] duration metric: took 1.681191787s to copy over tarball
	I0410 22:49:10.828019   58701 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0410 22:49:09.888688   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:09.892576   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:09.892607   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:09.889179   59513 retry.go:31] will retry after 560.585608ms: waiting for machine to come up
	I0410 22:49:10.451001   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:10.451630   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:10.451663   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:10.451590   59513 retry.go:31] will retry after 601.519186ms: waiting for machine to come up
	I0410 22:49:11.054324   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.054664   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.054693   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.054653   59513 retry.go:31] will retry after 750.183717ms: waiting for machine to come up
	I0410 22:49:11.805908   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:11.806303   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:11.806331   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:11.806254   59513 retry.go:31] will retry after 883.805148ms: waiting for machine to come up
	I0410 22:49:12.691316   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:12.691861   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:12.691893   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:12.691804   59513 retry.go:31] will retry after 1.39605629s: waiting for machine to come up
	I0410 22:49:14.090350   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:14.090795   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:14.090821   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:14.090753   59513 retry.go:31] will retry after 1.388324423s: waiting for machine to come up
	I0410 22:49:10.585333   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:11.585062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.085191   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:12.585644   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.085615   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.585355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.085270   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:14.584868   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:15.085639   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:13.521844   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:16.041569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:13.328492   58701 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.500439721s)
	I0410 22:49:13.328534   58701 crio.go:469] duration metric: took 2.500564923s to extract the tarball
	I0410 22:49:13.328545   58701 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0410 22:49:13.367568   58701 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:13.415759   58701 crio.go:514] all images are preloaded for cri-o runtime.
	I0410 22:49:13.415780   58701 cache_images.go:84] Images are preloaded, skipping loading
	I0410 22:49:13.415788   58701 kubeadm.go:928] updating node { 192.168.72.170 8444 v1.29.3 crio true true} ...
	I0410 22:49:13.415899   58701 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-519831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:13.415982   58701 ssh_runner.go:195] Run: crio config
	I0410 22:49:13.473019   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:13.473046   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:13.473063   58701 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:13.473100   58701 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.170 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-519831 NodeName:default-k8s-diff-port-519831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:13.473261   58701 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-519831"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:13.473325   58701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0410 22:49:13.487302   58701 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:13.487368   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:13.498496   58701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0410 22:49:13.518312   58701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0410 22:49:13.537972   58701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0410 22:49:13.558714   58701 ssh_runner.go:195] Run: grep 192.168.72.170	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:13.562886   58701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:13.575957   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:13.706316   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:13.725898   58701 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831 for IP: 192.168.72.170
	I0410 22:49:13.725924   58701 certs.go:194] generating shared ca certs ...
	I0410 22:49:13.725944   58701 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:13.726119   58701 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:13.726173   58701 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:13.726185   58701 certs.go:256] generating profile certs ...
	I0410 22:49:13.726297   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/client.key
	I0410 22:49:13.726398   58701 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key.ff579077
	I0410 22:49:13.726454   58701 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key
	I0410 22:49:13.726606   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:13.726644   58701 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:13.726656   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:13.726685   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:13.726725   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:13.726756   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:13.726811   58701 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:13.727747   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:13.780060   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:13.818446   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:13.865986   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:13.897578   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0410 22:49:13.937123   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0410 22:49:13.970558   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:13.997678   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/default-k8s-diff-port-519831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0410 22:49:14.025173   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:14.051190   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:14.079109   58701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:14.107547   58701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:14.128029   58701 ssh_runner.go:195] Run: openssl version
	I0410 22:49:14.134686   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:14.148733   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154057   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.154114   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:14.160626   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:14.174406   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:14.187513   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193279   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.193344   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:14.199518   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:14.213538   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:14.225618   58701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230610   58701 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.230666   58701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:14.236756   58701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:14.250041   58701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:14.255320   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:14.262821   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:14.268854   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:14.275152   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:14.281598   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:14.287895   58701 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:49:14.294125   58701 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-519831 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-519831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:14.294246   58701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:14.294301   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.332192   58701 cri.go:89] found id: ""
	I0410 22:49:14.332268   58701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:14.343174   58701 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:14.343198   58701 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:14.343205   58701 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:14.343261   58701 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:14.355648   58701 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:14.357310   58701 kubeconfig.go:125] found "default-k8s-diff-port-519831" server: "https://192.168.72.170:8444"
	I0410 22:49:14.360713   58701 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:14.371972   58701 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.170
	I0410 22:49:14.372011   58701 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:14.372025   58701 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:14.372083   58701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:14.410517   58701 cri.go:89] found id: ""
	I0410 22:49:14.410594   58701 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:14.428686   58701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:14.443256   58701 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:14.443281   58701 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:14.443353   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0410 22:49:14.455086   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:14.455156   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:14.466151   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0410 22:49:14.476799   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:14.476852   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:14.487588   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.498476   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:14.498534   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:14.509248   58701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0410 22:49:14.520223   58701 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:14.520287   58701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:14.531388   58701 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:14.542775   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:14.673733   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.773338   58701 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099570437s)
	I0410 22:49:15.773385   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:15.985355   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.052996   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:16.126251   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:16.126362   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.626615   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.127289   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.166269   58701 api_server.go:72] duration metric: took 1.040013076s to wait for apiserver process to appear ...
	I0410 22:49:17.166315   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:17.166339   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:17.166964   58701 api_server.go:269] stopped: https://192.168.72.170:8444/healthz: Get "https://192.168.72.170:8444/healthz": dial tcp 192.168.72.170:8444: connect: connection refused
	I0410 22:49:15.480947   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:15.481358   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:15.481386   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:15.481309   59513 retry.go:31] will retry after 2.276682979s: waiting for machine to come up
	I0410 22:49:17.759404   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:17.759931   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:17.759975   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:17.759887   59513 retry.go:31] will retry after 2.254373826s: waiting for machine to come up
	I0410 22:49:15.585476   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.085404   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:16.585123   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.085713   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:17.584877   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.085601   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.585222   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.084891   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:19.585215   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:20.085668   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:18.519156   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:20.520053   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:17.667248   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.709507   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:20.709538   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:20.709554   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:20.740392   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:20.740483   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.166658   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.174343   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.174378   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:21.667345   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:21.685078   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0410 22:49:21.685112   58701 api_server.go:103] status: https://192.168.72.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0410 22:49:22.166644   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:49:22.171611   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:49:22.178452   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:49:22.178484   58701 api_server.go:131] duration metric: took 5.012161431s to wait for apiserver health ...
	I0410 22:49:22.178493   58701 cni.go:84] Creating CNI manager for ""
	I0410 22:49:22.178499   58701 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:22.180370   58701 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:22.181768   58701 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:22.197462   58701 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:22.218348   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:22.236800   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:22.236830   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:22.236837   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:22.236843   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:22.236849   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:22.236861   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0410 22:49:22.236866   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:22.236871   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:22.236876   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0410 22:49:22.236884   58701 system_pods.go:74] duration metric: took 18.510987ms to wait for pod list to return data ...
	I0410 22:49:22.236893   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:22.242143   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:22.242167   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:22.242177   58701 node_conditions.go:105] duration metric: took 5.279415ms to run NodePressure ...
	I0410 22:49:22.242192   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:22.532741   58701 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537418   58701 kubeadm.go:733] kubelet initialised
	I0410 22:49:22.537444   58701 kubeadm.go:734] duration metric: took 4.675489ms waiting for restarted kubelet to initialise ...
	I0410 22:49:22.537453   58701 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:22.543364   58701 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.549161   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549186   58701 pod_ready.go:81] duration metric: took 5.796619ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.549196   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "coredns-76f75df574-ghnvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.549207   58701 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.554131   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554156   58701 pod_ready.go:81] duration metric: took 4.941026ms for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.554165   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.554172   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.558783   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558812   58701 pod_ready.go:81] duration metric: took 4.633262ms for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.558822   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.558828   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:22.622314   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622344   58701 pod_ready.go:81] duration metric: took 63.505681ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:22.622356   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:22.622370   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.022239   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022266   58701 pod_ready.go:81] duration metric: took 399.888837ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.022275   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-proxy-5mbwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.022286   58701 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.422213   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422245   58701 pod_ready.go:81] duration metric: took 399.950443ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.422257   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.422270   58701 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:23.823832   58701 pod_ready.go:97] node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823858   58701 pod_ready.go:81] duration metric: took 401.581123ms for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:49:23.823868   58701 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-519831" hosting pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:23.823875   58701 pod_ready.go:38] duration metric: took 1.286413141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:23.823889   58701 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:49:23.840663   58701 ops.go:34] apiserver oom_adj: -16
	I0410 22:49:23.840691   58701 kubeadm.go:591] duration metric: took 9.497479077s to restartPrimaryControlPlane
	I0410 22:49:23.840702   58701 kubeadm.go:393] duration metric: took 9.546582608s to StartCluster
	I0410 22:49:23.840718   58701 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.840795   58701 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:49:23.843350   58701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:23.843613   58701 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.170 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:49:23.845385   58701 out.go:177] * Verifying Kubernetes components...
	I0410 22:49:23.843685   58701 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:49:23.846686   58701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:23.845421   58701 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846834   58701 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-519831"
	I0410 22:49:23.843826   58701 config.go:182] Loaded profile config "default-k8s-diff-port-519831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	W0410 22:49:23.846852   58701 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:49:23.846901   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.845429   58701 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.846969   58701 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-519831"
	I0410 22:49:23.845433   58701 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-519831"
	I0410 22:49:23.847069   58701 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.847088   58701 addons.go:243] addon metrics-server should already be in state true
	I0410 22:49:23.847122   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.847349   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847358   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847381   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847384   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.847495   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.847532   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.863090   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I0410 22:49:23.863240   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0410 22:49:23.863685   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.863793   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.864315   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864333   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864356   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.864371   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.864741   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864749   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.864949   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.865210   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.865258   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.867599   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0410 22:49:23.868035   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.868627   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.868652   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.868739   58701 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-519831"
	W0410 22:49:23.868757   58701 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:49:23.868785   58701 host.go:66] Checking if "default-k8s-diff-port-519831" exists ...
	I0410 22:49:23.869023   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.869094   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869136   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.869562   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.869630   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.881589   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0410 22:49:23.881997   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.882429   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.882442   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.882719   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.882914   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.884708   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.886865   58701 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:49:23.886946   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0410 22:49:23.888493   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:49:23.888511   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:49:23.888532   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.888850   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.889129   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0410 22:49:23.889513   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.889536   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.889601   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.890020   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.890265   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.890285   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.890308   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.890667   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.891458   58701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:49:23.891496   58701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:49:23.892090   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.892232   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.894143   58701 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:20.015689   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:20.016192   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:20.016230   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:20.016163   59513 retry.go:31] will retry after 2.611766259s: waiting for machine to come up
	I0410 22:49:22.629270   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:22.629704   57270 main.go:141] libmachine: (no-preload-646133) DBG | unable to find current IP address of domain no-preload-646133 in network mk-no-preload-646133
	I0410 22:49:22.629731   57270 main.go:141] libmachine: (no-preload-646133) DBG | I0410 22:49:22.629644   59513 retry.go:31] will retry after 3.270808972s: waiting for machine to come up
	I0410 22:49:23.892695   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.892720   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.895489   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.895599   58701 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:23.895609   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:49:23.895623   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.896367   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.896558   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.896754   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.898964   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899320   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.899355   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.899535   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.899715   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.899855   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.899999   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:23.910046   58701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0410 22:49:23.910471   58701 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:49:23.911056   58701 main.go:141] libmachine: Using API Version  1
	I0410 22:49:23.911077   58701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:49:23.911445   58701 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:49:23.911653   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetState
	I0410 22:49:23.913330   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .DriverName
	I0410 22:49:23.913603   58701 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:23.913619   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:49:23.913637   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHHostname
	I0410 22:49:23.916303   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916759   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:67:d5", ip: ""} in network mk-default-k8s-diff-port-519831: {Iface:virbr4 ExpiryTime:2024-04-10 23:48:58 +0000 UTC Type:0 Mac:52:54:00:dc:67:d5 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:default-k8s-diff-port-519831 Clientid:01:52:54:00:dc:67:d5}
	I0410 22:49:23.916820   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | domain default-k8s-diff-port-519831 has defined IP address 192.168.72.170 and MAC address 52:54:00:dc:67:d5 in network mk-default-k8s-diff-port-519831
	I0410 22:49:23.916923   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHPort
	I0410 22:49:23.917137   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHKeyPath
	I0410 22:49:23.917377   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .GetSSHUsername
	I0410 22:49:23.917517   58701 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/default-k8s-diff-port-519831/id_rsa Username:docker}
	I0410 22:49:24.067636   58701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:24.087396   58701 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:24.204429   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:49:24.204457   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:49:24.213319   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:49:24.224083   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:49:24.234156   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:49:24.234182   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:49:24.273950   58701 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.273980   58701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:49:24.295822   58701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:49:24.580460   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580498   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580835   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.580853   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.580864   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.580872   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.581102   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.581126   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:24.589648   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:24.589714   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:24.589981   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:24.590040   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:24.590062   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339438   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.043578779s)
	I0410 22:49:25.339489   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339499   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339451   58701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115333809s)
	I0410 22:49:25.339560   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339593   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339872   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339897   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.339911   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.339924   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.339944   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.339956   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.339984   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340004   58701 main.go:141] libmachine: Making call to close driver server
	I0410 22:49:25.340015   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) Calling .Close
	I0410 22:49:25.340149   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.340185   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.340203   58701 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-519831"
	I0410 22:49:25.341481   58701 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:49:25.341497   58701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:49:25.344575   58701 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0410 22:49:20.585629   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.084898   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:21.585346   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.085672   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:22.585768   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.085613   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.585507   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.085104   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:24.585745   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:25.084858   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:23.017917   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.018591   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:27.019206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:25.341622   58701 main.go:141] libmachine: (default-k8s-diff-port-519831) DBG | Closing plugin on server side
	I0410 22:49:25.345974   58701 addons.go:505] duration metric: took 1.502302613s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0410 22:49:26.094458   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:25.904062   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904580   57270 main.go:141] libmachine: (no-preload-646133) Found IP for machine: 192.168.50.17
	I0410 22:49:25.904608   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has current primary IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.904622   57270 main.go:141] libmachine: (no-preload-646133) Reserving static IP address...
	I0410 22:49:25.905076   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.905117   57270 main.go:141] libmachine: (no-preload-646133) DBG | skip adding static IP to network mk-no-preload-646133 - found existing host DHCP lease matching {name: "no-preload-646133", mac: "52:54:00:35:62:0e", ip: "192.168.50.17"}
	I0410 22:49:25.905134   57270 main.go:141] libmachine: (no-preload-646133) Reserved static IP address: 192.168.50.17
	I0410 22:49:25.905151   57270 main.go:141] libmachine: (no-preload-646133) Waiting for SSH to be available...
	I0410 22:49:25.905170   57270 main.go:141] libmachine: (no-preload-646133) DBG | Getting to WaitForSSH function...
	I0410 22:49:25.907397   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907773   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:25.907796   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:25.907937   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH client type: external
	I0410 22:49:25.907960   57270 main.go:141] libmachine: (no-preload-646133) DBG | Using SSH private key: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa (-rw-------)
	I0410 22:49:25.907979   57270 main.go:141] libmachine: (no-preload-646133) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0410 22:49:25.907989   57270 main.go:141] libmachine: (no-preload-646133) DBG | About to run SSH command:
	I0410 22:49:25.907997   57270 main.go:141] libmachine: (no-preload-646133) DBG | exit 0
	I0410 22:49:26.032683   57270 main.go:141] libmachine: (no-preload-646133) DBG | SSH cmd err, output: <nil>: 
	I0410 22:49:26.033065   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetConfigRaw
	I0410 22:49:26.033761   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.036545   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.036951   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.036982   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.037187   57270 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/config.json ...
	I0410 22:49:26.037403   57270 machine.go:94] provisionDockerMachine start ...
	I0410 22:49:26.037424   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:26.037655   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.039750   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040081   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.040102   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.040285   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.040486   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040657   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.040818   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.040972   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.041180   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.041197   57270 main.go:141] libmachine: About to run SSH command:
	hostname
	I0410 22:49:26.149298   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0410 22:49:26.149335   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149618   57270 buildroot.go:166] provisioning hostname "no-preload-646133"
	I0410 22:49:26.149647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.149849   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.152432   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152799   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.152829   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.152973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.153233   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153406   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.153571   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.153774   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.153992   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.154010   57270 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-646133 && echo "no-preload-646133" | sudo tee /etc/hostname
	I0410 22:49:26.283760   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-646133
	
	I0410 22:49:26.283794   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.286605   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.286925   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.286955   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.287097   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.287277   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287425   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.287551   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.287725   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.287944   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.287969   57270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-646133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-646133/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-646133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0410 22:49:26.402869   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0410 22:49:26.402905   57270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18610-5679/.minikube CaCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18610-5679/.minikube}
	I0410 22:49:26.402945   57270 buildroot.go:174] setting up certificates
	I0410 22:49:26.402956   57270 provision.go:84] configureAuth start
	I0410 22:49:26.402973   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetMachineName
	I0410 22:49:26.403234   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:26.405718   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406079   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.406119   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.406357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.408549   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.408882   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.408917   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.409034   57270 provision.go:143] copyHostCerts
	I0410 22:49:26.409106   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem, removing ...
	I0410 22:49:26.409124   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem
	I0410 22:49:26.409177   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/ca.pem (1082 bytes)
	I0410 22:49:26.409310   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem, removing ...
	I0410 22:49:26.409320   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem
	I0410 22:49:26.409341   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/cert.pem (1123 bytes)
	I0410 22:49:26.409405   57270 exec_runner.go:144] found /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem, removing ...
	I0410 22:49:26.409412   57270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem
	I0410 22:49:26.409430   57270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18610-5679/.minikube/key.pem (1679 bytes)
	I0410 22:49:26.409476   57270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem org=jenkins.no-preload-646133 san=[127.0.0.1 192.168.50.17 localhost minikube no-preload-646133]
	I0410 22:49:26.567556   57270 provision.go:177] copyRemoteCerts
	I0410 22:49:26.567611   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0410 22:49:26.567647   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.570205   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570589   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.570614   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.570805   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.571034   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.571172   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.571294   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:26.655943   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0410 22:49:26.681691   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0410 22:49:26.706573   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0410 22:49:26.733054   57270 provision.go:87] duration metric: took 330.073783ms to configureAuth
	I0410 22:49:26.733088   57270 buildroot.go:189] setting minikube options for container-runtime
	I0410 22:49:26.733276   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:49:26.733347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:26.735910   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736264   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:26.736295   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:26.736474   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:26.736648   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736798   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:26.736925   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:26.737055   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:26.737225   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:26.737241   57270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0410 22:49:27.008174   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0410 22:49:27.008202   57270 machine.go:97] duration metric: took 970.785508ms to provisionDockerMachine
	I0410 22:49:27.008216   57270 start.go:293] postStartSetup for "no-preload-646133" (driver="kvm2")
	I0410 22:49:27.008236   57270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0410 22:49:27.008263   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.008554   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0410 22:49:27.008580   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.011150   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011561   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.011604   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.011900   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.012090   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.012274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.012432   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.105247   57270 ssh_runner.go:195] Run: cat /etc/os-release
	I0410 22:49:27.109842   57270 info.go:137] Remote host: Buildroot 2023.02.9
	I0410 22:49:27.109868   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/addons for local assets ...
	I0410 22:49:27.109927   57270 filesync.go:126] Scanning /home/jenkins/minikube-integration/18610-5679/.minikube/files for local assets ...
	I0410 22:49:27.109993   57270 filesync.go:149] local asset: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem -> 130012.pem in /etc/ssl/certs
	I0410 22:49:27.110080   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0410 22:49:27.121451   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:27.151797   57270 start.go:296] duration metric: took 143.569287ms for postStartSetup
	I0410 22:49:27.151836   57270 fix.go:56] duration metric: took 19.642403615s for fixHost
	I0410 22:49:27.151865   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.154454   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154869   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.154903   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.154987   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.155193   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155357   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.155512   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.155660   57270 main.go:141] libmachine: Using SSH client type: native
	I0410 22:49:27.155862   57270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.17 22 <nil> <nil>}
	I0410 22:49:27.155875   57270 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0410 22:49:27.265609   57270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712789367.209761579
	
	I0410 22:49:27.265652   57270 fix.go:216] guest clock: 1712789367.209761579
	I0410 22:49:27.265662   57270 fix.go:229] Guest: 2024-04-10 22:49:27.209761579 +0000 UTC Remote: 2024-04-10 22:49:27.151840464 +0000 UTC m=+377.371052419 (delta=57.921115ms)
	I0410 22:49:27.265687   57270 fix.go:200] guest clock delta is within tolerance: 57.921115ms
	I0410 22:49:27.265697   57270 start.go:83] releasing machines lock for "no-preload-646133", held for 19.756293566s
	I0410 22:49:27.265724   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.265960   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:27.268735   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269184   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.269216   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.269380   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270014   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270233   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:49:27.270331   57270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0410 22:49:27.270376   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.270645   57270 ssh_runner.go:195] Run: cat /version.json
	I0410 22:49:27.270669   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:49:27.273542   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273846   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.273986   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274019   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274140   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274230   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:27.274259   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:27.274318   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274400   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:49:27.274531   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:49:27.274536   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274688   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:49:27.274723   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.274806   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:49:27.359922   57270 ssh_runner.go:195] Run: systemctl --version
	I0410 22:49:27.400885   57270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0410 22:49:27.555260   57270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0410 22:49:27.561275   57270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0410 22:49:27.561333   57270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0410 22:49:27.578478   57270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0410 22:49:27.578502   57270 start.go:494] detecting cgroup driver to use...
	I0410 22:49:27.578567   57270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0410 22:49:27.598020   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0410 22:49:27.613068   57270 docker.go:217] disabling cri-docker service (if available) ...
	I0410 22:49:27.613140   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0410 22:49:27.629253   57270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0410 22:49:27.644130   57270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0410 22:49:27.791801   57270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0410 22:49:27.952366   57270 docker.go:233] disabling docker service ...
	I0410 22:49:27.952477   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0410 22:49:27.968629   57270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0410 22:49:27.982330   57270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0410 22:49:28.117396   57270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0410 22:49:28.240808   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0410 22:49:28.257299   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0410 22:49:28.280918   57270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0410 22:49:28.280991   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.296415   57270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0410 22:49:28.296480   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.308602   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.319535   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.329812   57270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0410 22:49:28.341466   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.354706   57270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.374405   57270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0410 22:49:28.385094   57270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0410 22:49:28.394412   57270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0410 22:49:28.394466   57270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0410 22:49:28.407654   57270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0410 22:49:28.418381   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:28.525783   57270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0410 22:49:28.678643   57270 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0410 22:49:28.678706   57270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0410 22:49:28.683681   57270 start.go:562] Will wait 60s for crictl version
	I0410 22:49:28.683737   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:28.687703   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0410 22:49:28.725311   57270 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0410 22:49:28.725414   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.755393   57270 ssh_runner.go:195] Run: crio --version
	I0410 22:49:28.788963   57270 out.go:177] * Preparing Kubernetes v1.30.0-rc.1 on CRI-O 1.29.1 ...
	I0410 22:49:28.790274   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetIP
	I0410 22:49:28.793091   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793418   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:49:28.793452   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:49:28.793659   57270 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0410 22:49:28.798916   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:28.814575   57270 kubeadm.go:877] updating cluster {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0410 22:49:28.814689   57270 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 22:49:28.814717   57270 ssh_runner.go:195] Run: sudo crictl images --output json
	I0410 22:49:28.852604   57270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.1". assuming images are not preloaded.
	I0410 22:49:28.852627   57270 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.1 registry.k8s.io/kube-controller-manager:v1.30.0-rc.1 registry.k8s.io/kube-scheduler:v1.30.0-rc.1 registry.k8s.io/kube-proxy:v1.30.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0410 22:49:28.852698   57270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.852707   57270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.852733   57270 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.852756   57270 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0410 22:49:28.852803   57270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.852870   57270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.852890   57270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.852917   57270 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854348   57270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:28.854354   57270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:28.854378   57270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:28.854419   57270 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:28.854421   57270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:28.854355   57270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:28.854353   57270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:28.854740   57270 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0410 22:49:29.066608   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0410 22:49:29.072486   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.073347   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.075270   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.082649   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.085737   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.093699   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.290780   57270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" does not exist at hash "ec89f92328e612508517005987e6c1a6986d25f4a98e23718c9f81c8469f0a9b" in container runtime
	I0410 22:49:29.290810   57270 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0410 22:49:29.290839   57270 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.290837   57270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.290849   57270 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0410 22:49:29.290871   57270 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290902   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.290882   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304346   57270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.1" does not exist at hash "69f89e5e13a4119f8752cd7f0fb30e3db4ce480a5e08580a3bf72597464bd061" in container runtime
	I0410 22:49:29.304409   57270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.304459   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304510   57270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" does not exist at hash "bae20da4f5927473c4af74f5e61f8a97ab9a0c387c8441b32bbd786696d1b895" in container runtime
	I0410 22:49:29.304599   57270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.304635   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.304563   57270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" does not exist at hash "577298566a55a1e59cf3a32f087ff6069addd375fa0a2ec78c4634f77c88c090" in container runtime
	I0410 22:49:29.304689   57270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.304738   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:29.311219   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0410 22:49:29.311264   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0410 22:49:29.311311   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0410 22:49:29.324663   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0410 22:49:29.324770   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0410 22:49:29.324855   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0410 22:49:29.442426   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0410 22:49:29.442541   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.458416   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0410 22:49:29.458526   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:29.468890   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.468998   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:29.481365   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.481482   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:29.498862   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498899   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0410 22:49:29.498913   57270 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498927   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498951   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1 (exists)
	I0410 22:49:29.498957   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0410 22:49:29.498964   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:29.498982   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1 (exists)
	I0410 22:49:29.499012   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:29.498926   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0410 22:49:29.507249   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1 (exists)
	I0410 22:49:29.507282   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1 (exists)
	I0410 22:49:29.751612   57270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
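The `sudo podman image inspect --format {{.Id}}` / `sudo podman load -i ...` pairs in the log above are minikube's cached-image check: an image tarball is only transferred and loaded when the runtime does not already report an ID for that reference. Purely as an illustration of that check-then-load pattern (not minikube's actual cache_images code; the helper names imageID and ensureImage are invented here, and the sketch runs the commands locally instead of through the SSH runner used in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks the runtime (podman, as in the log above) for the image's ID.
// An inspect error or empty output means the image is not present locally.
func imageID(ref string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// ensureImage loads a cached tarball only when the runtime does not already have the image.
func ensureImage(ref, tarball string) error {
	if id, err := imageID(ref); err == nil && id != "" {
		return nil // already present, nothing to transfer
	}
	fmt.Printf("loading %s from %s\n", ref, tarball)
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	// Example mirroring the storage-provisioner check seen in the log.
	if err := ensureImage("gcr.io/k8s-minikube/storage-provisioner:v5",
		"/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
		fmt.Println("load failed:", err)
	}
}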
	I0410 22:49:25.585095   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.085119   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:26.585846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.084920   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:27.585251   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.084926   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:28.585643   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.084937   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.585666   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:30.085088   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:29.518476   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:31.518837   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:28.592323   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.098027   58701 node_ready.go:53] node "default-k8s-diff-port-519831" has status "Ready":"False"
	I0410 22:49:31.591789   58701 node_ready.go:49] node "default-k8s-diff-port-519831" has status "Ready":"True"
	I0410 22:49:31.591822   58701 node_ready.go:38] duration metric: took 7.504383585s for node "default-k8s-diff-port-519831" to be "Ready" ...
	I0410 22:49:31.591835   58701 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:31.599103   58701 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607758   58701 pod_ready.go:92] pod "coredns-76f75df574-ghnvx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:31.607787   58701 pod_ready.go:81] duration metric: took 8.655521ms for pod "coredns-76f75df574-ghnvx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:31.607801   58701 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:33.690936   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.191950196s)
	I0410 22:49:33.690965   57270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.939318786s)
	I0410 22:49:33.691014   57270 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0410 22:49:33.691045   57270 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:33.690973   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0410 22:49:33.691091   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.691101   57270 ssh_runner.go:195] Run: which crictl
	I0410 22:49:33.691163   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1
	I0410 22:49:33.695868   57270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:49:30.585515   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.085273   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:31.585347   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.084892   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:32.585361   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.085648   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:33.585256   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.084938   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.585005   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:35.085466   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:34.018733   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:36.019904   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:33.615785   58701 pod_ready.go:102] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:35.115811   58701 pod_ready.go:92] pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:35.115846   58701 pod_ready.go:81] duration metric: took 3.508038321s for pod "etcd-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:35.115856   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123593   58701 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.123624   58701 pod_ready.go:81] duration metric: took 2.007760022s for pod "kube-apiserver-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.123638   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130390   58701 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.130421   58701 pod_ready.go:81] duration metric: took 6.771239ms for pod "kube-controller-manager-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.130436   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136219   58701 pod_ready.go:92] pod "kube-proxy-5mbwx" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.136253   58701 pod_ready.go:81] duration metric: took 5.809077ms for pod "kube-proxy-5mbwx" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.136265   58701 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142909   58701 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace has status "Ready":"True"
	I0410 22:49:37.142939   58701 pod_ready.go:81] duration metric: took 6.664922ms for pod "kube-scheduler-default-k8s-diff-port-519831" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:37.142954   58701 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
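The pod_ready.go lines interleaved through this part of the log poll each system-critical pod until its Ready condition turns True or the 6m0s budget expires. As a minimal sketch of that polling pattern only (this is not minikube's pod_ready.go; waitPodReady is a hypothetical helper and the 2-second interval is an assumption), a client-go version might look like this, using the /var/lib/minikube/kubeconfig path that appears elsewhere in the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat lookup errors as "not ready yet" and keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		fmt.Println(err)
		return
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	// Same 6m0s budget the log applies to system-critical pods.
	fmt.Println(waitPodReady(context.Background(), c, "kube-system", "etcd-default-k8s-diff-port-519831", 6*time.Minute))
}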
	I0410 22:49:35.767190   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.1: (2.075997626s)
	I0410 22:49:35.767227   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1 from cache
	I0410 22:49:35.767261   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767278   57270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071386498s)
	I0410 22:49:35.767326   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1
	I0410 22:49:35.767327   57270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0410 22:49:35.767497   57270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:35.773679   57270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0410 22:49:37.666289   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.1: (1.898906389s)
	I0410 22:49:37.666326   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.1 from cache
	I0410 22:49:37.666358   57270 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:37.666422   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0410 22:49:39.652778   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.986322091s)
	I0410 22:49:39.652820   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0410 22:49:39.652855   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:39.652951   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1
	I0410 22:49:35.585228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.085699   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:36.585690   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.085760   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:37.584867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:37.584947   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:37.625964   57719 cri.go:89] found id: ""
	I0410 22:49:37.625989   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.625996   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:37.626001   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:37.626046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:37.669151   57719 cri.go:89] found id: ""
	I0410 22:49:37.669178   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.669188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:37.669194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:37.669242   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:37.711426   57719 cri.go:89] found id: ""
	I0410 22:49:37.711456   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.711466   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:37.711474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:37.711538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:37.754678   57719 cri.go:89] found id: ""
	I0410 22:49:37.754707   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.754719   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:37.754726   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:37.754809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:37.795259   57719 cri.go:89] found id: ""
	I0410 22:49:37.795291   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.795301   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:37.795307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:37.795375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:37.836961   57719 cri.go:89] found id: ""
	I0410 22:49:37.836994   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.837004   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:37.837011   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:37.837075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:37.876195   57719 cri.go:89] found id: ""
	I0410 22:49:37.876223   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.876233   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:37.876239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:37.876290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:37.911688   57719 cri.go:89] found id: ""
	I0410 22:49:37.911715   57719 logs.go:276] 0 containers: []
	W0410 22:49:37.911725   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:37.911736   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:37.911751   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:37.954690   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:37.954734   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:38.006731   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:38.006771   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:38.024290   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:38.024314   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:38.148504   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:38.148529   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:38.148561   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:38.519483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:40.520822   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:39.150543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:41.151300   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:42.217749   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.1: (2.564772479s)
	I0410 22:49:42.217778   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1 from cache
	I0410 22:49:42.217802   57270 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:42.217843   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1
	I0410 22:49:44.577826   57270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.1: (2.359955682s)
	I0410 22:49:44.577865   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1 from cache
	I0410 22:49:44.577892   57270 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:44.577940   57270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0410 22:49:40.726314   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:40.743098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:40.743168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:40.794673   57719 cri.go:89] found id: ""
	I0410 22:49:40.794697   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.794704   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:40.794710   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:40.794756   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:40.836274   57719 cri.go:89] found id: ""
	I0410 22:49:40.836308   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.836319   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:40.836327   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:40.836408   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:40.882249   57719 cri.go:89] found id: ""
	I0410 22:49:40.882276   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.882285   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:40.882292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:40.882357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:40.925829   57719 cri.go:89] found id: ""
	I0410 22:49:40.925867   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.925878   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:40.925885   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:40.925936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:40.978494   57719 cri.go:89] found id: ""
	I0410 22:49:40.978529   57719 logs.go:276] 0 containers: []
	W0410 22:49:40.978540   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:40.978547   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:40.978611   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:41.020935   57719 cri.go:89] found id: ""
	I0410 22:49:41.020964   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.020975   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:41.020982   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:41.021040   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:41.060779   57719 cri.go:89] found id: ""
	I0410 22:49:41.060812   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.060824   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:41.060831   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:41.060885   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:41.119604   57719 cri.go:89] found id: ""
	I0410 22:49:41.119632   57719 logs.go:276] 0 containers: []
	W0410 22:49:41.119643   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:41.119653   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:41.119667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:41.188739   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:41.188774   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:41.203682   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:41.203735   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:41.293423   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:41.293451   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:41.293468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:41.366606   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:41.366649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:43.914447   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:43.930350   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:43.930439   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:43.968867   57719 cri.go:89] found id: ""
	I0410 22:49:43.968921   57719 logs.go:276] 0 containers: []
	W0410 22:49:43.968932   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:43.968939   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:43.969012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:44.010143   57719 cri.go:89] found id: ""
	I0410 22:49:44.010169   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.010181   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:44.010188   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:44.010264   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:44.048610   57719 cri.go:89] found id: ""
	I0410 22:49:44.048637   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.048645   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:44.048651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:44.048697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:44.105939   57719 cri.go:89] found id: ""
	I0410 22:49:44.105973   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.106001   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:44.106009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:44.106086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:44.149699   57719 cri.go:89] found id: ""
	I0410 22:49:44.149726   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.149735   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:44.149743   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:44.149803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:44.193131   57719 cri.go:89] found id: ""
	I0410 22:49:44.193159   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.193167   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:44.193173   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:44.193255   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:44.233751   57719 cri.go:89] found id: ""
	I0410 22:49:44.233781   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.233789   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:44.233801   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:44.233868   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:44.284404   57719 cri.go:89] found id: ""
	I0410 22:49:44.284432   57719 logs.go:276] 0 containers: []
	W0410 22:49:44.284441   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:44.284449   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:44.284461   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:44.330082   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:44.330118   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:44.383452   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:44.383487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:44.399604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:44.399632   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:44.476328   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:44.476368   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:44.476415   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:43.019922   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.519954   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:43.650596   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.651668   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:45.537183   57270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18610-5679/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0410 22:49:45.537228   57270 cache_images.go:123] Successfully loaded all cached images
	I0410 22:49:45.537235   57270 cache_images.go:92] duration metric: took 16.68459637s to LoadCachedImages
	I0410 22:49:45.537249   57270 kubeadm.go:928] updating node { 192.168.50.17 8443 v1.30.0-rc.1 crio true true} ...
	I0410 22:49:45.537401   57270 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-646133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0410 22:49:45.537476   57270 ssh_runner.go:195] Run: crio config
	I0410 22:49:45.587002   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:45.587031   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:45.587047   57270 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0410 22:49:45.587069   57270 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.17 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-646133 NodeName:no-preload-646133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0410 22:49:45.587205   57270 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-646133"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0410 22:49:45.587272   57270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.1
	I0410 22:49:45.600694   57270 binaries.go:44] Found k8s binaries, skipping transfer
	I0410 22:49:45.600758   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0410 22:49:45.613884   57270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0410 22:49:45.633871   57270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0410 22:49:45.654733   57270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0410 22:49:45.673976   57270 ssh_runner.go:195] Run: grep 192.168.50.17	control-plane.minikube.internal$ /etc/hosts
	I0410 22:49:45.678260   57270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0410 22:49:45.693499   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:49:45.819034   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:49:45.838775   57270 certs.go:68] Setting up /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133 for IP: 192.168.50.17
	I0410 22:49:45.838799   57270 certs.go:194] generating shared ca certs ...
	I0410 22:49:45.838819   57270 certs.go:226] acquiring lock for ca certs: {Name:mkdf516eef4dca65fdb46927a7d7b777d4098df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:49:45.839010   57270 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key
	I0410 22:49:45.839064   57270 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key
	I0410 22:49:45.839078   57270 certs.go:256] generating profile certs ...
	I0410 22:49:45.839175   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.key
	I0410 22:49:45.839256   57270 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key.d257fb06
	I0410 22:49:45.839310   57270 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key
	I0410 22:49:45.839480   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem (1338 bytes)
	W0410 22:49:45.839521   57270 certs.go:480] ignoring /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001_empty.pem, impossibly tiny 0 bytes
	I0410 22:49:45.839531   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca-key.pem (1679 bytes)
	I0410 22:49:45.839551   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/ca.pem (1082 bytes)
	I0410 22:49:45.839608   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/cert.pem (1123 bytes)
	I0410 22:49:45.839633   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/certs/key.pem (1679 bytes)
	I0410 22:49:45.839674   57270 certs.go:484] found cert: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem (1708 bytes)
	I0410 22:49:45.840315   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0410 22:49:45.897688   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0410 22:49:45.932242   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0410 22:49:45.979537   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0410 22:49:46.020562   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0410 22:49:46.057254   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0410 22:49:46.084070   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0410 22:49:46.112807   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0410 22:49:46.141650   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/ssl/certs/130012.pem --> /usr/share/ca-certificates/130012.pem (1708 bytes)
	I0410 22:49:46.170167   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0410 22:49:46.196917   57270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18610-5679/.minikube/certs/13001.pem --> /usr/share/ca-certificates/13001.pem (1338 bytes)
	I0410 22:49:46.222645   57270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0410 22:49:46.242626   57270 ssh_runner.go:195] Run: openssl version
	I0410 22:49:46.249048   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0410 22:49:46.265110   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270018   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 10 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.270083   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0410 22:49:46.276298   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0410 22:49:46.288165   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13001.pem && ln -fs /usr/share/ca-certificates/13001.pem /etc/ssl/certs/13001.pem"
	I0410 22:49:46.299040   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303584   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 10 21:38 /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.303627   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13001.pem
	I0410 22:49:46.309278   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13001.pem /etc/ssl/certs/51391683.0"
	I0410 22:49:46.319990   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130012.pem && ln -fs /usr/share/ca-certificates/130012.pem /etc/ssl/certs/130012.pem"
	I0410 22:49:46.331654   57270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336700   57270 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 10 21:38 /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.336750   57270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130012.pem
	I0410 22:49:46.342767   57270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130012.pem /etc/ssl/certs/3ec20f2e.0"
	I0410 22:49:46.355005   57270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0410 22:49:46.359870   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0410 22:49:46.366270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0410 22:49:46.372625   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0410 22:49:46.379270   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0410 22:49:46.386312   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0410 22:49:46.392796   57270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0410 22:49:46.399209   57270 kubeadm.go:391] StartCluster: {Name:no-preload-646133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-646133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 22:49:46.399318   57270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0410 22:49:46.399405   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.439061   57270 cri.go:89] found id: ""
	I0410 22:49:46.439149   57270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0410 22:49:46.450243   57270 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0410 22:49:46.450265   57270 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0410 22:49:46.450271   57270 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0410 22:49:46.450323   57270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0410 22:49:46.460553   57270 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:49:46.461608   57270 kubeconfig.go:125] found "no-preload-646133" server: "https://192.168.50.17:8443"
	I0410 22:49:46.464469   57270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0410 22:49:46.474775   57270 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.17
	I0410 22:49:46.474808   57270 kubeadm.go:1154] stopping kube-system containers ...
	I0410 22:49:46.474820   57270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0410 22:49:46.474860   57270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0410 22:49:46.514933   57270 cri.go:89] found id: ""
	I0410 22:49:46.515010   57270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0410 22:49:46.533830   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:49:46.547026   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:49:46.547042   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:49:46.547081   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:49:46.557093   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:49:46.557157   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:49:46.567102   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:49:46.576939   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:49:46.576998   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:49:46.586921   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.596189   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:49:46.596260   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:49:46.607803   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:49:46.618166   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:49:46.618240   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:49:46.628406   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:49:46.638748   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:46.767824   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.028868   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.261006059s)
	I0410 22:49:48.028907   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.253185   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.323164   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:48.404069   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:49:48.404153   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:48.904557   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.404477   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:49.437891   57270 api_server.go:72] duration metric: took 1.033818826s to wait for apiserver process to appear ...
	I0410 22:49:49.437927   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:49:49.437953   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:49.438623   57270 api_server.go:269] stopped: https://192.168.50.17:8443/healthz: Get "https://192.168.50.17:8443/healthz": dial tcp 192.168.50.17:8443: connect: connection refused
	I0410 22:49:47.054122   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:47.069583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:47.069654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:47.113953   57719 cri.go:89] found id: ""
	I0410 22:49:47.113981   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.113989   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:47.113995   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:47.114054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:47.156770   57719 cri.go:89] found id: ""
	I0410 22:49:47.156798   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.156808   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:47.156814   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:47.156891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:47.195227   57719 cri.go:89] found id: ""
	I0410 22:49:47.195252   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.195261   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:47.195266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:47.195328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:47.238109   57719 cri.go:89] found id: ""
	I0410 22:49:47.238138   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.238150   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:47.238157   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:47.238212   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.285062   57719 cri.go:89] found id: ""
	I0410 22:49:47.285093   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.285101   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:47.285108   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:47.285185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:47.324635   57719 cri.go:89] found id: ""
	I0410 22:49:47.324663   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.324670   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:47.324676   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:47.324744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:47.365404   57719 cri.go:89] found id: ""
	I0410 22:49:47.365437   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.365445   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:47.365468   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:47.365535   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:47.412296   57719 cri.go:89] found id: ""
	I0410 22:49:47.412335   57719 logs.go:276] 0 containers: []
	W0410 22:49:47.412346   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:47.412367   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:47.412384   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:47.497998   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:47.498019   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:47.498033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:47.590502   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:47.590536   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:47.647665   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:47.647692   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:47.697704   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:47.697741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.213410   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:50.229408   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:50.229488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:50.268514   57719 cri.go:89] found id: ""
	I0410 22:49:50.268545   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.268556   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:50.268563   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:50.268620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:50.308733   57719 cri.go:89] found id: ""
	I0410 22:49:50.308762   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.308790   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:50.308796   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:50.308857   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:50.353929   57719 cri.go:89] found id: ""
	I0410 22:49:50.353966   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.353977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:50.353985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:50.354043   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:50.397979   57719 cri.go:89] found id: ""
	I0410 22:49:50.398009   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.398019   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:50.398026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:50.398086   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:47.521284   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.018571   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.020874   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:48.151768   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:50.151820   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:49.939075   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.355813   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0410 22:49:52.355855   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0410 22:49:52.355868   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.502702   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.502733   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.502796   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.509360   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.509401   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:52.939056   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:52.946114   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:52.946154   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.438741   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.444154   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0410 22:49:53.444187   57270 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0410 22:49:53.938848   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:49:53.947578   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:49:53.956247   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:49:53.956281   57270 api_server.go:131] duration metric: took 4.518344859s to wait for apiserver health ...
	I0410 22:49:53.956292   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:49:53.956301   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:49:53.958053   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:49:53.959420   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:49:53.973242   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0410 22:49:54.004623   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:49:54.024138   57270 system_pods.go:59] 8 kube-system pods found
	I0410 22:49:54.024185   57270 system_pods.go:61] "coredns-7db6d8ff4d-lbcp6" [1ff36529-d718-41e7-9b61-54ba32efab0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0410 22:49:54.024195   57270 system_pods.go:61] "etcd-no-preload-646133" [a704a953-1418-4425-8ac1-272c632050c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0410 22:49:54.024214   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [90d4ff18-767c-4dbf-b4ad-ff02cb3d542f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0410 22:49:54.024231   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [82c0778e-690f-41a6-a57f-017ab79fd029] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0410 22:49:54.024243   57270 system_pods.go:61] "kube-proxy-v5fbl" [002efd18-4375-455b-9b4a-15bb739120e0] Running
	I0410 22:49:54.024252   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fa9898bc-36a6-4cc4-91e6-bba4ccd22d9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0410 22:49:54.024264   57270 system_pods.go:61] "metrics-server-569cc877fc-pw276" [22de5c2f-13ab-4f69-8eb6-ec4a3c3d1e80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:49:54.024277   57270 system_pods.go:61] "storage-provisioner" [1028921e-3924-4614-bcb6-f949c18e9e4e] Running
	I0410 22:49:54.024287   57270 system_pods.go:74] duration metric: took 19.638409ms to wait for pod list to return data ...
	I0410 22:49:54.024301   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:49:54.031666   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:49:54.031694   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:49:54.031705   57270 node_conditions.go:105] duration metric: took 7.394201ms to run NodePressure ...
	I0410 22:49:54.031720   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0410 22:49:54.339352   57270 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345115   57270 kubeadm.go:733] kubelet initialised
	I0410 22:49:54.345146   57270 kubeadm.go:734] duration metric: took 5.76519ms waiting for restarted kubelet to initialise ...
	I0410 22:49:54.345156   57270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:49:54.352254   57270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
	I0410 22:49:50.436191   57719 cri.go:89] found id: ""
	I0410 22:49:50.436222   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.436234   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:50.436241   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:50.436316   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:50.476462   57719 cri.go:89] found id: ""
	I0410 22:49:50.476486   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.476494   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:50.476499   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:50.476557   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:50.520025   57719 cri.go:89] found id: ""
	I0410 22:49:50.520054   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.520063   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:50.520071   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:50.520127   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:50.564535   57719 cri.go:89] found id: ""
	I0410 22:49:50.564570   57719 logs.go:276] 0 containers: []
	W0410 22:49:50.564581   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:50.564593   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:50.564624   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:50.620587   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:50.620629   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:50.634802   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:50.634832   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:50.707625   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:50.707655   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:50.707671   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:50.791935   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:50.791970   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:53.339109   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:53.361555   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:53.361632   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:53.428170   57719 cri.go:89] found id: ""
	I0410 22:49:53.428202   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.428212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:53.428219   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:53.428281   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:53.501929   57719 cri.go:89] found id: ""
	I0410 22:49:53.501957   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.501968   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:53.501977   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:53.502055   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:53.548844   57719 cri.go:89] found id: ""
	I0410 22:49:53.548871   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.548890   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:53.548897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:53.548949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:53.595056   57719 cri.go:89] found id: ""
	I0410 22:49:53.595081   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.595090   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:53.595098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:53.595153   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:53.638885   57719 cri.go:89] found id: ""
	I0410 22:49:53.638920   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.638938   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:53.638946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:53.639046   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:53.685526   57719 cri.go:89] found id: ""
	I0410 22:49:53.685565   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.685573   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:53.685579   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:53.685650   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:53.725084   57719 cri.go:89] found id: ""
	I0410 22:49:53.725112   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.725119   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:53.725125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:53.725172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:53.767031   57719 cri.go:89] found id: ""
	I0410 22:49:53.767062   57719 logs.go:276] 0 containers: []
	W0410 22:49:53.767072   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:53.767083   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:53.767103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:53.826570   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:53.826618   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:53.843784   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:53.843822   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:53.926277   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:53.926299   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:53.926317   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:54.024735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:54.024782   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:54.519305   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.520139   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:52.651382   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:55.149798   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:57.150803   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.359479   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:58.859341   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:56.586265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:56.602113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:56.602200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:56.647041   57719 cri.go:89] found id: ""
	I0410 22:49:56.647074   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.647086   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:56.647094   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:56.647168   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:56.688053   57719 cri.go:89] found id: ""
	I0410 22:49:56.688086   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.688096   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:56.688104   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:56.688190   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:56.729176   57719 cri.go:89] found id: ""
	I0410 22:49:56.729210   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.729221   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:56.729229   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:56.729293   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:56.768877   57719 cri.go:89] found id: ""
	I0410 22:49:56.768905   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.768913   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:56.768919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:56.768966   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:56.807228   57719 cri.go:89] found id: ""
	I0410 22:49:56.807274   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.807286   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:56.807294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:56.807361   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:56.848183   57719 cri.go:89] found id: ""
	I0410 22:49:56.848216   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.848224   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:56.848230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:56.848284   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:49:56.887894   57719 cri.go:89] found id: ""
	I0410 22:49:56.887923   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.887931   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:49:56.887937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:49:56.887993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:49:56.926908   57719 cri.go:89] found id: ""
	I0410 22:49:56.926935   57719 logs.go:276] 0 containers: []
	W0410 22:49:56.926944   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:49:56.926952   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:49:56.926968   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:49:57.012614   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:49:57.012640   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:49:57.012657   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:49:57.098735   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:49:57.098784   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:49:57.140798   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:49:57.140831   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:57.204239   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:49:57.204283   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:49:59.720328   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:49:59.735964   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:49:59.736042   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:49:59.774351   57719 cri.go:89] found id: ""
	I0410 22:49:59.774383   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.774393   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:49:59.774407   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:49:59.774468   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:49:59.817222   57719 cri.go:89] found id: ""
	I0410 22:49:59.817248   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.817255   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:49:59.817260   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:49:59.817310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:49:59.854551   57719 cri.go:89] found id: ""
	I0410 22:49:59.854582   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.854594   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:49:59.854602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:49:59.854656   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:49:59.894334   57719 cri.go:89] found id: ""
	I0410 22:49:59.894367   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.894375   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:49:59.894381   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:49:59.894442   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:49:59.932446   57719 cri.go:89] found id: ""
	I0410 22:49:59.932472   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.932482   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:49:59.932489   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:49:59.932552   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:49:59.969168   57719 cri.go:89] found id: ""
	I0410 22:49:59.969193   57719 logs.go:276] 0 containers: []
	W0410 22:49:59.969201   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:49:59.969209   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:49:59.969273   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:00.006918   57719 cri.go:89] found id: ""
	I0410 22:50:00.006960   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.006972   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:00.006979   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:00.007036   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:00.050380   57719 cri.go:89] found id: ""
	I0410 22:50:00.050411   57719 logs.go:276] 0 containers: []
	W0410 22:50:00.050424   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:00.050433   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:00.050454   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:00.066340   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:00.066366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:00.146454   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:00.146479   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:00.146494   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:00.231174   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:00.231225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:00.278732   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:00.278759   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:49:59.020938   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.518584   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:49:59.151137   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.650307   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:01.359992   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:01.360021   57270 pod_ready.go:81] duration metric: took 7.007734788s for pod "coredns-7db6d8ff4d-lbcp6" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:01.360035   57270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867322   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:02.867349   57270 pod_ready.go:81] duration metric: took 1.507305949s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.867362   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:02.833035   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:02.847316   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:02.847380   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:02.888793   57719 cri.go:89] found id: ""
	I0410 22:50:02.888821   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.888832   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:02.888840   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:02.888897   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:02.926495   57719 cri.go:89] found id: ""
	I0410 22:50:02.926525   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.926535   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:02.926542   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:02.926603   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:02.966185   57719 cri.go:89] found id: ""
	I0410 22:50:02.966217   57719 logs.go:276] 0 containers: []
	W0410 22:50:02.966227   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:02.966233   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:02.966295   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:03.007383   57719 cri.go:89] found id: ""
	I0410 22:50:03.007408   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.007414   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:03.007420   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:03.007490   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:03.044245   57719 cri.go:89] found id: ""
	I0410 22:50:03.044273   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.044281   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:03.044292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:03.044367   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:03.078820   57719 cri.go:89] found id: ""
	I0410 22:50:03.078849   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.078859   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:03.078866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:03.078927   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:03.117205   57719 cri.go:89] found id: ""
	I0410 22:50:03.117233   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.117244   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:03.117251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:03.117313   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:03.155698   57719 cri.go:89] found id: ""
	I0410 22:50:03.155725   57719 logs.go:276] 0 containers: []
	W0410 22:50:03.155735   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:03.155743   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:03.155758   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:03.231685   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:03.231712   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:03.231724   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:03.315122   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:03.315167   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:03.361151   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:03.361186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:03.412134   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:03.412168   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:04.017523   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.024382   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.150291   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:06.151488   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:04.873656   57270 pod_ready.go:102] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:05.874079   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:05.874106   57270 pod_ready.go:81] duration metric: took 3.006735064s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.874116   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:07.880447   57270 pod_ready.go:102] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.881209   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.881241   57270 pod_ready.go:81] duration metric: took 3.007117254s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.881271   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887939   57270 pod_ready.go:92] pod "kube-proxy-v5fbl" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.887963   57270 pod_ready.go:81] duration metric: took 6.68304ms for pod "kube-proxy-v5fbl" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.887975   57270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894389   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:50:08.894415   57270 pod_ready.go:81] duration metric: took 6.43215ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:08.894428   57270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	I0410 22:50:05.928116   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:05.942237   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:05.942337   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:05.983813   57719 cri.go:89] found id: ""
	I0410 22:50:05.983842   57719 logs.go:276] 0 containers: []
	W0410 22:50:05.983853   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:05.983861   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:05.983945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:06.024590   57719 cri.go:89] found id: ""
	I0410 22:50:06.024618   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.024626   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:06.024637   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:06.024698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:06.063040   57719 cri.go:89] found id: ""
	I0410 22:50:06.063075   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.063087   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:06.063094   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:06.063160   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:06.102224   57719 cri.go:89] found id: ""
	I0410 22:50:06.102250   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.102259   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:06.102273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:06.102342   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:06.144202   57719 cri.go:89] found id: ""
	I0410 22:50:06.144229   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.144236   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:06.144242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:06.144288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:06.189215   57719 cri.go:89] found id: ""
	I0410 22:50:06.189243   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.189250   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:06.189256   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:06.189308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:06.225218   57719 cri.go:89] found id: ""
	I0410 22:50:06.225247   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.225258   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:06.225266   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:06.225330   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:06.265229   57719 cri.go:89] found id: ""
	I0410 22:50:06.265262   57719 logs.go:276] 0 containers: []
	W0410 22:50:06.265273   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:06.265283   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:06.265306   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:06.279794   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:06.279825   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:06.348038   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:06.348063   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:06.348079   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:06.431293   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:06.431339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:06.476033   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:06.476060   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.032099   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:09.046628   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:09.046765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:09.086900   57719 cri.go:89] found id: ""
	I0410 22:50:09.086928   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.086936   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:09.086942   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:09.086998   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:09.124989   57719 cri.go:89] found id: ""
	I0410 22:50:09.125018   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.125028   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:09.125035   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:09.125096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:09.163720   57719 cri.go:89] found id: ""
	I0410 22:50:09.163749   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.163761   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:09.163769   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:09.163822   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:09.203846   57719 cri.go:89] found id: ""
	I0410 22:50:09.203875   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.203883   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:09.203888   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:09.203945   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:09.242974   57719 cri.go:89] found id: ""
	I0410 22:50:09.243002   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.243016   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:09.243024   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:09.243092   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:09.278664   57719 cri.go:89] found id: ""
	I0410 22:50:09.278687   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.278694   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:09.278700   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:09.278762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:09.313335   57719 cri.go:89] found id: ""
	I0410 22:50:09.313359   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.313367   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:09.313372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:09.313419   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:09.351160   57719 cri.go:89] found id: ""
	I0410 22:50:09.351195   57719 logs.go:276] 0 containers: []
	W0410 22:50:09.351206   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:09.351225   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:09.351239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:09.425989   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:09.426015   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:09.426033   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:09.505189   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:09.505223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:09.549619   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:09.549651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:09.604322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:09.604360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:08.520115   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:11.018253   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:08.649190   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.650453   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:10.903726   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.401154   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:12.119780   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:12.135377   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:12.135458   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:12.178105   57719 cri.go:89] found id: ""
	I0410 22:50:12.178129   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.178138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:12.178144   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:12.178207   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:12.217369   57719 cri.go:89] found id: ""
	I0410 22:50:12.217397   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.217409   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:12.217424   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:12.217488   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:12.254185   57719 cri.go:89] found id: ""
	I0410 22:50:12.254213   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.254222   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:12.254230   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:12.254291   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:12.295007   57719 cri.go:89] found id: ""
	I0410 22:50:12.295038   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.295048   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:12.295057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:12.295125   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:12.334620   57719 cri.go:89] found id: ""
	I0410 22:50:12.334644   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.334651   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:12.334657   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:12.334707   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:12.371217   57719 cri.go:89] found id: ""
	I0410 22:50:12.371241   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.371249   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:12.371255   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:12.371302   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:12.409571   57719 cri.go:89] found id: ""
	I0410 22:50:12.409599   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.409608   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:12.409617   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:12.409675   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:12.453133   57719 cri.go:89] found id: ""
	I0410 22:50:12.453159   57719 logs.go:276] 0 containers: []
	W0410 22:50:12.453169   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:12.453180   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:12.453194   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:12.505322   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:12.505360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:12.520284   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:12.520315   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:12.608057   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:12.608082   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:12.608097   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:12.693240   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:12.693274   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:15.244628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:15.261915   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:15.262020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:15.302874   57719 cri.go:89] found id: ""
	I0410 22:50:15.302903   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.302910   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:15.302916   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:15.302973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:15.347492   57719 cri.go:89] found id: ""
	I0410 22:50:15.347518   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.347527   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:15.347534   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:15.347598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:15.394156   57719 cri.go:89] found id: ""
	I0410 22:50:15.394188   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.394198   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:15.394205   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:15.394265   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:13.518316   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.520507   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:13.150145   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.651083   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.401582   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:17.901179   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:15.437656   57719 cri.go:89] found id: ""
	I0410 22:50:15.437682   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.437690   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:15.437695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:15.437748   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:15.475658   57719 cri.go:89] found id: ""
	I0410 22:50:15.475686   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.475697   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:15.475704   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:15.475765   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:15.517908   57719 cri.go:89] found id: ""
	I0410 22:50:15.517930   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.517937   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:15.517942   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:15.517991   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:15.560083   57719 cri.go:89] found id: ""
	I0410 22:50:15.560108   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.560117   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:15.560123   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:15.560178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:15.603967   57719 cri.go:89] found id: ""
	I0410 22:50:15.603994   57719 logs.go:276] 0 containers: []
	W0410 22:50:15.604002   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:15.604013   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:15.604028   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:15.659994   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:15.660029   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:15.675627   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:15.675658   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:15.761297   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:15.761320   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:15.761339   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:15.839225   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:15.839265   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.386062   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:18.399609   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:18.399677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:18.443002   57719 cri.go:89] found id: ""
	I0410 22:50:18.443030   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.443040   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:18.443048   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:18.443106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:18.485089   57719 cri.go:89] found id: ""
	I0410 22:50:18.485121   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.485132   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:18.485140   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:18.485200   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:18.524310   57719 cri.go:89] found id: ""
	I0410 22:50:18.524338   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.524347   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:18.524354   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:18.524412   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:18.563535   57719 cri.go:89] found id: ""
	I0410 22:50:18.563573   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.563582   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:18.563587   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:18.563634   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:18.600451   57719 cri.go:89] found id: ""
	I0410 22:50:18.600478   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.600487   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:18.600495   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:18.600562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:18.640445   57719 cri.go:89] found id: ""
	I0410 22:50:18.640472   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.640480   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:18.640485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:18.640550   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:18.677691   57719 cri.go:89] found id: ""
	I0410 22:50:18.677725   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.677746   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:18.677754   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:18.677817   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:18.716753   57719 cri.go:89] found id: ""
	I0410 22:50:18.716850   57719 logs.go:276] 0 containers: []
	W0410 22:50:18.716876   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:18.716897   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:18.716918   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:18.804099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:18.804130   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:18.804144   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:18.883569   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:18.883611   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:18.930014   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:18.930045   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:18.980029   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:18.980065   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:18.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.020820   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:18.151029   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:20.650000   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:19.904069   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.401462   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:24.401892   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:21.495499   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:21.511001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:21.511075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:21.551469   57719 cri.go:89] found id: ""
	I0410 22:50:21.551511   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.551522   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:21.551540   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:21.551605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:21.590539   57719 cri.go:89] found id: ""
	I0410 22:50:21.590570   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.590580   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:21.590587   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:21.590654   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:21.629005   57719 cri.go:89] found id: ""
	I0410 22:50:21.629030   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.629042   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:21.629048   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:21.629108   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:21.669745   57719 cri.go:89] found id: ""
	I0410 22:50:21.669767   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.669774   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:21.669780   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:21.669834   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:21.707806   57719 cri.go:89] found id: ""
	I0410 22:50:21.707831   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.707839   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:21.707844   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:21.707892   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:21.746698   57719 cri.go:89] found id: ""
	I0410 22:50:21.746727   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.746736   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:21.746742   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:21.746802   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:21.783048   57719 cri.go:89] found id: ""
	I0410 22:50:21.783070   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.783079   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:21.783084   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:21.783131   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:21.822457   57719 cri.go:89] found id: ""
	I0410 22:50:21.822484   57719 logs.go:276] 0 containers: []
	W0410 22:50:21.822492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:21.822500   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:21.822513   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:21.894706   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:21.894747   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:21.909861   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:21.909903   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:21.999344   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:21.999370   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:21.999386   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:22.080004   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:22.080042   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:24.620924   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:24.634937   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:24.634999   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:24.686619   57719 cri.go:89] found id: ""
	I0410 22:50:24.686644   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.686655   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:24.686662   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:24.686744   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:24.723632   57719 cri.go:89] found id: ""
	I0410 22:50:24.723658   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.723667   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:24.723675   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:24.723738   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:24.760708   57719 cri.go:89] found id: ""
	I0410 22:50:24.760739   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.760750   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:24.760757   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:24.760804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:24.795680   57719 cri.go:89] found id: ""
	I0410 22:50:24.795712   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.795722   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:24.795729   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:24.795793   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:24.833033   57719 cri.go:89] found id: ""
	I0410 22:50:24.833063   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.833074   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:24.833082   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:24.833130   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:24.872840   57719 cri.go:89] found id: ""
	I0410 22:50:24.872864   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.872871   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:24.872877   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:24.872936   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:24.915640   57719 cri.go:89] found id: ""
	I0410 22:50:24.915678   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.915688   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:24.915696   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:24.915755   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:24.957164   57719 cri.go:89] found id: ""
	I0410 22:50:24.957207   57719 logs.go:276] 0 containers: []
	W0410 22:50:24.957219   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:24.957230   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:24.957244   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:25.006551   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:25.006601   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:25.021623   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:25.021649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:25.094699   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:25.094722   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:25.094741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:25.181280   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:25.181316   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:22.518442   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.018206   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:22.650481   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:25.151162   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:26.904127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.400642   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.723475   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:27.737294   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:27.737381   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:27.776098   57719 cri.go:89] found id: ""
	I0410 22:50:27.776126   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.776138   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:27.776146   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:27.776203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:27.814324   57719 cri.go:89] found id: ""
	I0410 22:50:27.814352   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.814364   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:27.814371   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:27.814447   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:27.849573   57719 cri.go:89] found id: ""
	I0410 22:50:27.849603   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.849614   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:27.849621   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:27.849682   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:27.888904   57719 cri.go:89] found id: ""
	I0410 22:50:27.888932   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.888940   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:27.888946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:27.888993   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:27.931772   57719 cri.go:89] found id: ""
	I0410 22:50:27.931800   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.931812   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:27.931821   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:27.931881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:27.975633   57719 cri.go:89] found id: ""
	I0410 22:50:27.975666   57719 logs.go:276] 0 containers: []
	W0410 22:50:27.975676   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:27.975684   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:27.975736   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:28.012251   57719 cri.go:89] found id: ""
	I0410 22:50:28.012280   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.012290   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:28.012298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:28.012364   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:28.048848   57719 cri.go:89] found id: ""
	I0410 22:50:28.048886   57719 logs.go:276] 0 containers: []
	W0410 22:50:28.048898   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:28.048908   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:28.048923   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:28.102215   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:28.102257   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:28.118052   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:28.118081   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:28.190738   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:28.190762   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:28.190777   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:28.269294   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:28.269330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:27.519211   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:29.521111   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.017915   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:27.651922   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.150852   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:31.401210   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:33.902054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:30.833927   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:30.848196   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:30.848266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:30.886077   57719 cri.go:89] found id: ""
	I0410 22:50:30.886117   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.886127   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:30.886133   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:30.886179   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:30.924638   57719 cri.go:89] found id: ""
	I0410 22:50:30.924668   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.924678   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:30.924686   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:30.924762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:30.961106   57719 cri.go:89] found id: ""
	I0410 22:50:30.961136   57719 logs.go:276] 0 containers: []
	W0410 22:50:30.961147   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:30.961154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:30.961213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:31.001374   57719 cri.go:89] found id: ""
	I0410 22:50:31.001412   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.001427   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:31.001434   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:31.001498   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:31.038928   57719 cri.go:89] found id: ""
	I0410 22:50:31.038961   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.038971   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:31.038980   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:31.039057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:31.077033   57719 cri.go:89] found id: ""
	I0410 22:50:31.077067   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.077076   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:31.077083   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:31.077139   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:31.115227   57719 cri.go:89] found id: ""
	I0410 22:50:31.115257   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.115266   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:31.115273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:31.115335   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:31.157339   57719 cri.go:89] found id: ""
	I0410 22:50:31.157372   57719 logs.go:276] 0 containers: []
	W0410 22:50:31.157382   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:31.157393   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:31.157409   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:31.198742   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:31.198770   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:31.255388   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:31.255422   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:31.272018   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:31.272048   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:31.344503   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:31.344524   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:31.344541   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:33.925749   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:33.939402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:33.939475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:33.976070   57719 cri.go:89] found id: ""
	I0410 22:50:33.976093   57719 logs.go:276] 0 containers: []
	W0410 22:50:33.976100   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:33.976106   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:33.976172   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:34.013723   57719 cri.go:89] found id: ""
	I0410 22:50:34.013748   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.013758   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:34.013765   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:34.013821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:34.062678   57719 cri.go:89] found id: ""
	I0410 22:50:34.062704   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.062712   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:34.062718   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:34.062774   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:34.123007   57719 cri.go:89] found id: ""
	I0410 22:50:34.123038   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.123046   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:34.123052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:34.123096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:34.188811   57719 cri.go:89] found id: ""
	I0410 22:50:34.188841   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.188852   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:34.188859   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:34.188949   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:34.223585   57719 cri.go:89] found id: ""
	I0410 22:50:34.223609   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.223618   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:34.223625   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:34.223680   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:34.260004   57719 cri.go:89] found id: ""
	I0410 22:50:34.260028   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.260036   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:34.260041   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:34.260096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:34.303064   57719 cri.go:89] found id: ""
	I0410 22:50:34.303093   57719 logs.go:276] 0 containers: []
	W0410 22:50:34.303104   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:34.303115   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:34.303134   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:34.359105   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:34.359142   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:34.375420   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:34.375450   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:34.449619   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:34.449645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:34.449660   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:34.534214   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:34.534248   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:34.518609   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.016973   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:32.649917   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:34.661652   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.150648   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:36.401988   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:38.901505   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:37.076525   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:37.090789   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:37.090849   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:37.130848   57719 cri.go:89] found id: ""
	I0410 22:50:37.130881   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.130893   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:37.130900   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:37.130967   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:37.170158   57719 cri.go:89] found id: ""
	I0410 22:50:37.170181   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.170188   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:37.170194   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:37.170269   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:37.210238   57719 cri.go:89] found id: ""
	I0410 22:50:37.210264   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.210274   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:37.210282   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:37.210328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:37.256763   57719 cri.go:89] found id: ""
	I0410 22:50:37.256789   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.256800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:37.256807   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:37.256875   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:37.295323   57719 cri.go:89] found id: ""
	I0410 22:50:37.295355   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.295364   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:37.295372   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:37.295443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:37.334066   57719 cri.go:89] found id: ""
	I0410 22:50:37.334094   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.334105   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:37.334113   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:37.334170   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:37.374428   57719 cri.go:89] found id: ""
	I0410 22:50:37.374458   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.374477   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:37.374485   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:37.374544   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:37.412114   57719 cri.go:89] found id: ""
	I0410 22:50:37.412142   57719 logs.go:276] 0 containers: []
	W0410 22:50:37.412152   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:37.412161   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:37.412174   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:37.453693   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:37.453717   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:37.505484   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:37.505524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:37.523645   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:37.523672   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:37.595107   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:37.595134   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:37.595150   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.180649   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:40.195168   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:40.195243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:40.240130   57719 cri.go:89] found id: ""
	I0410 22:50:40.240160   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.240169   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:40.240175   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:40.240241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:40.276366   57719 cri.go:89] found id: ""
	I0410 22:50:40.276390   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.276406   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:40.276412   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:40.276466   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:40.314991   57719 cri.go:89] found id: ""
	I0410 22:50:40.315016   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.315023   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:40.315029   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:40.315075   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:40.354301   57719 cri.go:89] found id: ""
	I0410 22:50:40.354331   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.354342   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:40.354349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:40.354414   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:40.393093   57719 cri.go:89] found id: ""
	I0410 22:50:40.393125   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.393135   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:40.393143   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:40.393204   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:39.021170   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:41.518285   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:39.650047   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.150206   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.902024   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:42.904180   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:40.429641   57719 cri.go:89] found id: ""
	I0410 22:50:40.429665   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.429674   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:40.429680   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:40.429727   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:40.468184   57719 cri.go:89] found id: ""
	I0410 22:50:40.468213   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.468224   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:40.468232   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:40.468304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:40.505586   57719 cri.go:89] found id: ""
	I0410 22:50:40.505616   57719 logs.go:276] 0 containers: []
	W0410 22:50:40.505627   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:40.505637   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:40.505652   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:40.562078   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:40.562119   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:40.578135   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:40.578213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:40.659018   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:40.659047   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:40.659061   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:40.746434   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:40.746478   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.287852   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:43.301797   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:43.301869   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:43.339778   57719 cri.go:89] found id: ""
	I0410 22:50:43.339813   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.339822   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:43.339829   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:43.339893   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:43.378716   57719 cri.go:89] found id: ""
	I0410 22:50:43.378748   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.378759   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:43.378767   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:43.378836   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:43.417128   57719 cri.go:89] found id: ""
	I0410 22:50:43.417152   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.417163   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:43.417171   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:43.417234   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:43.459577   57719 cri.go:89] found id: ""
	I0410 22:50:43.459608   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.459617   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:43.459623   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:43.459678   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:43.497519   57719 cri.go:89] found id: ""
	I0410 22:50:43.497551   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.497561   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:43.497566   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:43.497620   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:43.534400   57719 cri.go:89] found id: ""
	I0410 22:50:43.534433   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.534444   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:43.534451   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:43.534540   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:43.574213   57719 cri.go:89] found id: ""
	I0410 22:50:43.574242   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.574253   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:43.574283   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:43.574344   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:43.611078   57719 cri.go:89] found id: ""
	I0410 22:50:43.611106   57719 logs.go:276] 0 containers: []
	W0410 22:50:43.611113   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:43.611121   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:43.611137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:43.698166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:43.698202   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:43.749368   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:43.749395   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:43.801584   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:43.801621   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:43.817012   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:43.817050   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:43.892325   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:43.518660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.017804   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:44.650389   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.650560   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:45.401723   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:47.901852   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:46.393325   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:46.407985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:46.408045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:46.442704   57719 cri.go:89] found id: ""
	I0410 22:50:46.442735   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.442745   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:46.442753   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:46.442821   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:46.485582   57719 cri.go:89] found id: ""
	I0410 22:50:46.485611   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.485618   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:46.485625   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:46.485683   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:46.524199   57719 cri.go:89] found id: ""
	I0410 22:50:46.524227   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.524234   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:46.524240   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:46.524288   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:46.560655   57719 cri.go:89] found id: ""
	I0410 22:50:46.560685   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.560694   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:46.560701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:46.560839   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:46.596617   57719 cri.go:89] found id: ""
	I0410 22:50:46.596646   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.596658   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:46.596666   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:46.596739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:46.634316   57719 cri.go:89] found id: ""
	I0410 22:50:46.634339   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.634347   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:46.634352   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:46.634399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:46.671466   57719 cri.go:89] found id: ""
	I0410 22:50:46.671493   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.671502   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:46.671509   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:46.671582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:46.709228   57719 cri.go:89] found id: ""
	I0410 22:50:46.709254   57719 logs.go:276] 0 containers: []
	W0410 22:50:46.709265   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:46.709275   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:46.709291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:46.761329   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:46.761366   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:46.778265   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:46.778288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:46.851092   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:46.851113   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:46.851125   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:46.929181   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:46.929223   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:49.471285   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:49.485474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:49.485551   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:49.523799   57719 cri.go:89] found id: ""
	I0410 22:50:49.523826   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.523838   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:49.523846   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:49.523899   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:49.562102   57719 cri.go:89] found id: ""
	I0410 22:50:49.562129   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.562137   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:49.562143   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:49.562196   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:49.600182   57719 cri.go:89] found id: ""
	I0410 22:50:49.600204   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.600211   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:49.600216   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:49.600262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:49.640002   57719 cri.go:89] found id: ""
	I0410 22:50:49.640028   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.640039   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:49.640047   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:49.640111   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:49.678815   57719 cri.go:89] found id: ""
	I0410 22:50:49.678847   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.678858   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:49.678866   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:49.678929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:49.716933   57719 cri.go:89] found id: ""
	I0410 22:50:49.716959   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.716969   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:49.716976   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:49.717039   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:49.756018   57719 cri.go:89] found id: ""
	I0410 22:50:49.756050   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.756060   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:49.756068   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:49.756132   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:49.802066   57719 cri.go:89] found id: ""
	I0410 22:50:49.802094   57719 logs.go:276] 0 containers: []
	W0410 22:50:49.802103   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:49.802110   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:49.802123   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:49.856363   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:49.856417   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:49.872297   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:49.872330   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:49.950152   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:49.950174   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:49.950185   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:50.031251   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:50.031291   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:48.517547   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.517942   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:49.150498   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:51.151491   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:50.401650   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.401866   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:52.574794   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:52.589052   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:52.589117   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:52.625911   57719 cri.go:89] found id: ""
	I0410 22:50:52.625941   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.625952   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:52.625960   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:52.626020   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:52.668749   57719 cri.go:89] found id: ""
	I0410 22:50:52.668773   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.668781   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:52.668787   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:52.668835   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:52.713420   57719 cri.go:89] found id: ""
	I0410 22:50:52.713447   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.713457   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:52.713473   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:52.713538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:52.750265   57719 cri.go:89] found id: ""
	I0410 22:50:52.750294   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.750301   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:52.750307   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:52.750354   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:52.787552   57719 cri.go:89] found id: ""
	I0410 22:50:52.787586   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.787597   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:52.787604   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:52.787670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:52.827988   57719 cri.go:89] found id: ""
	I0410 22:50:52.828013   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.828020   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:52.828026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:52.828072   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:52.864115   57719 cri.go:89] found id: ""
	I0410 22:50:52.864144   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.864155   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:52.864161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:52.864222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:52.906673   57719 cri.go:89] found id: ""
	I0410 22:50:52.906702   57719 logs.go:276] 0 containers: []
	W0410 22:50:52.906712   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:52.906723   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:52.906742   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:52.960842   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:52.960892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:52.976084   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:52.976114   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:53.052612   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:53.052638   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:53.052656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:53.132465   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:53.132518   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:53.018789   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.518169   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:53.154117   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.653267   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:54.903797   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:57.401445   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:55.676947   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:55.691098   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:55.691183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:55.728711   57719 cri.go:89] found id: ""
	I0410 22:50:55.728740   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.728750   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:55.728758   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:55.728824   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:55.768540   57719 cri.go:89] found id: ""
	I0410 22:50:55.768568   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.768578   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:55.768584   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:55.768649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:55.806901   57719 cri.go:89] found id: ""
	I0410 22:50:55.806928   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.806938   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:55.806945   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:55.807019   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:55.846777   57719 cri.go:89] found id: ""
	I0410 22:50:55.846807   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.846816   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:55.846822   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:55.846873   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:55.887143   57719 cri.go:89] found id: ""
	I0410 22:50:55.887172   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.887181   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:55.887186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:55.887241   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:55.929008   57719 cri.go:89] found id: ""
	I0410 22:50:55.929032   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.929040   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:55.929046   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:55.929098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:55.969496   57719 cri.go:89] found id: ""
	I0410 22:50:55.969526   57719 logs.go:276] 0 containers: []
	W0410 22:50:55.969536   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:55.969544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:55.969605   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:56.007786   57719 cri.go:89] found id: ""
	I0410 22:50:56.007818   57719 logs.go:276] 0 containers: []
	W0410 22:50:56.007828   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:56.007838   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:56.007854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:56.061616   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:56.061653   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:56.078664   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:56.078689   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:56.165015   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:56.165037   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:56.165053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:56.241928   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:56.241971   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:58.785955   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:50:58.799544   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:50:58.799604   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:50:58.837234   57719 cri.go:89] found id: ""
	I0410 22:50:58.837264   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.837275   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:50:58.837283   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:50:58.837350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:50:58.877818   57719 cri.go:89] found id: ""
	I0410 22:50:58.877854   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.877861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:50:58.877867   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:50:58.877921   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:50:58.919705   57719 cri.go:89] found id: ""
	I0410 22:50:58.919729   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.919740   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:50:58.919747   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:50:58.919809   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:50:58.957995   57719 cri.go:89] found id: ""
	I0410 22:50:58.958020   57719 logs.go:276] 0 containers: []
	W0410 22:50:58.958029   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:50:58.958036   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:50:58.958091   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:50:58.999966   57719 cri.go:89] found id: ""
	I0410 22:50:58.999995   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.000008   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:50:59.000016   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:50:59.000088   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:50:59.040516   57719 cri.go:89] found id: ""
	I0410 22:50:59.040541   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.040552   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:50:59.040560   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:50:59.040623   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:50:59.078869   57719 cri.go:89] found id: ""
	I0410 22:50:59.078899   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.078908   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:50:59.078913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:50:59.078961   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:50:59.116637   57719 cri.go:89] found id: ""
	I0410 22:50:59.116663   57719 logs.go:276] 0 containers: []
	W0410 22:50:59.116670   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:50:59.116679   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:50:59.116697   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:50:59.195852   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:50:59.195892   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:50:59.243256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:50:59.243282   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:50:59.299195   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:50:59.299263   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:50:59.314512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:50:59.314537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:50:59.386468   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:50:58.016995   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.018205   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:58.151543   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:00.650140   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:50:59.901858   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.902933   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:04.402128   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:01.886907   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:01.905169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:01.905251   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:01.944154   57719 cri.go:89] found id: ""
	I0410 22:51:01.944187   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.944198   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:01.944205   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:01.944268   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:01.982743   57719 cri.go:89] found id: ""
	I0410 22:51:01.982778   57719 logs.go:276] 0 containers: []
	W0410 22:51:01.982789   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:01.982797   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:01.982864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:02.020072   57719 cri.go:89] found id: ""
	I0410 22:51:02.020094   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.020102   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:02.020159   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:02.020213   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:02.064250   57719 cri.go:89] found id: ""
	I0410 22:51:02.064273   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.064280   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:02.064286   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:02.064339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:02.105013   57719 cri.go:89] found id: ""
	I0410 22:51:02.105045   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.105054   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:02.105060   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:02.105106   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:02.145664   57719 cri.go:89] found id: ""
	I0410 22:51:02.145689   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.145695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:02.145701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:02.145759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:02.189752   57719 cri.go:89] found id: ""
	I0410 22:51:02.189831   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.189850   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:02.189857   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:02.189929   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:02.228315   57719 cri.go:89] found id: ""
	I0410 22:51:02.228347   57719 logs.go:276] 0 containers: []
	W0410 22:51:02.228358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:02.228374   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:02.228390   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:02.281425   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:02.281460   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:02.296003   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:02.296031   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:02.389572   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:02.389599   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:02.389613   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.475881   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:02.475916   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.022037   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:05.037242   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:05.037304   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:05.073656   57719 cri.go:89] found id: ""
	I0410 22:51:05.073687   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.073698   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:05.073705   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:05.073767   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:05.114321   57719 cri.go:89] found id: ""
	I0410 22:51:05.114348   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.114356   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:05.114361   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:05.114430   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:05.153119   57719 cri.go:89] found id: ""
	I0410 22:51:05.153156   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.153164   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:05.153170   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:05.153230   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:05.193393   57719 cri.go:89] found id: ""
	I0410 22:51:05.193420   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.193428   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:05.193433   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:05.193479   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:05.229826   57719 cri.go:89] found id: ""
	I0410 22:51:05.229853   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.229861   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:05.229867   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:05.229915   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:05.265511   57719 cri.go:89] found id: ""
	I0410 22:51:05.265544   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.265555   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:05.265562   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:05.265627   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:05.302257   57719 cri.go:89] found id: ""
	I0410 22:51:05.302287   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.302297   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:05.302305   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:05.302386   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:05.347344   57719 cri.go:89] found id: ""
	I0410 22:51:05.347372   57719 logs.go:276] 0 containers: []
	W0410 22:51:05.347380   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:05.347388   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:05.347399   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:05.421796   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:05.421817   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:05.421829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:02.521499   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.017660   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.017945   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:02.651104   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.150286   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:07.150565   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:06.402266   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:08.406456   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:05.501803   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:05.501839   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:05.549161   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:05.549195   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:05.599598   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:05.599633   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.115679   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:08.130273   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:08.130350   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:08.172302   57719 cri.go:89] found id: ""
	I0410 22:51:08.172328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.172335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:08.172342   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:08.172390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:08.220789   57719 cri.go:89] found id: ""
	I0410 22:51:08.220812   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.220819   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:08.220825   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:08.220874   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:08.258299   57719 cri.go:89] found id: ""
	I0410 22:51:08.258328   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.258341   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:08.258349   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:08.258404   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:08.297698   57719 cri.go:89] found id: ""
	I0410 22:51:08.297726   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.297733   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:08.297739   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:08.297787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:08.335564   57719 cri.go:89] found id: ""
	I0410 22:51:08.335595   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.335605   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:08.335613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:08.335671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:08.373340   57719 cri.go:89] found id: ""
	I0410 22:51:08.373367   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.373377   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:08.373384   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:08.373481   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:08.413961   57719 cri.go:89] found id: ""
	I0410 22:51:08.413984   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.413993   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:08.414001   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:08.414062   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:08.459449   57719 cri.go:89] found id: ""
	I0410 22:51:08.459481   57719 logs.go:276] 0 containers: []
	W0410 22:51:08.459492   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:08.459505   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:08.459521   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:08.518061   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:08.518103   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:08.533653   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:08.533680   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:08.619882   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:08.619917   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:08.619932   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:08.696329   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:08.696364   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:09.518298   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.518877   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:09.650387   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.650614   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:10.902634   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:13.402009   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:11.256846   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:11.271521   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:11.271582   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:11.312829   57719 cri.go:89] found id: ""
	I0410 22:51:11.312851   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.312869   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:11.312876   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:11.312930   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:11.355183   57719 cri.go:89] found id: ""
	I0410 22:51:11.355210   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.355220   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:11.355227   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:11.355287   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:11.394345   57719 cri.go:89] found id: ""
	I0410 22:51:11.394376   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.394388   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:11.394396   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:11.394460   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:11.434128   57719 cri.go:89] found id: ""
	I0410 22:51:11.434155   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.434163   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:11.434169   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:11.434219   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:11.473160   57719 cri.go:89] found id: ""
	I0410 22:51:11.473189   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.473201   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:11.473208   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:11.473278   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:11.513782   57719 cri.go:89] found id: ""
	I0410 22:51:11.513815   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.513826   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:11.513835   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:11.513891   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:11.556057   57719 cri.go:89] found id: ""
	I0410 22:51:11.556085   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.556093   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:11.556100   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:11.556147   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:11.594557   57719 cri.go:89] found id: ""
	I0410 22:51:11.594579   57719 logs.go:276] 0 containers: []
	W0410 22:51:11.594586   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:11.594594   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:11.594609   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:11.672795   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:11.672841   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:11.716011   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:11.716046   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:11.769372   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:11.769413   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:11.784589   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:11.784617   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:11.857051   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.358019   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:14.372116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:14.372192   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:14.412020   57719 cri.go:89] found id: ""
	I0410 22:51:14.412049   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.412061   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:14.412068   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:14.412128   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:14.450317   57719 cri.go:89] found id: ""
	I0410 22:51:14.450349   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.450360   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:14.450368   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:14.450426   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:14.509080   57719 cri.go:89] found id: ""
	I0410 22:51:14.509104   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.509110   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:14.509116   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:14.509185   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:14.561540   57719 cri.go:89] found id: ""
	I0410 22:51:14.561572   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.561583   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:14.561590   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:14.561670   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:14.622498   57719 cri.go:89] found id: ""
	I0410 22:51:14.622528   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.622538   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:14.622546   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:14.622606   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:14.678451   57719 cri.go:89] found id: ""
	I0410 22:51:14.678481   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.678490   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:14.678498   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:14.678560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:14.720264   57719 cri.go:89] found id: ""
	I0410 22:51:14.720302   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.720315   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:14.720323   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:14.720388   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:14.758039   57719 cri.go:89] found id: ""
	I0410 22:51:14.758063   57719 logs.go:276] 0 containers: []
	W0410 22:51:14.758071   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:14.758079   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:14.758090   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:14.808111   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:14.808171   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:14.825444   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:14.825487   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:14.906859   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:14.906884   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:14.906899   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:14.995176   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:14.995225   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:14.017397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.017624   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:14.149898   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:16.150320   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:15.901542   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.902391   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:17.541159   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:17.556679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:17.556749   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:17.595839   57719 cri.go:89] found id: ""
	I0410 22:51:17.595869   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.595880   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:17.595895   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:17.595954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:17.633921   57719 cri.go:89] found id: ""
	I0410 22:51:17.633947   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.633957   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:17.633964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:17.634033   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:17.673467   57719 cri.go:89] found id: ""
	I0410 22:51:17.673493   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.673501   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:17.673507   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:17.673554   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:17.709631   57719 cri.go:89] found id: ""
	I0410 22:51:17.709660   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.709670   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:17.709679   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:17.709739   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:17.760852   57719 cri.go:89] found id: ""
	I0410 22:51:17.760880   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.760893   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:17.760908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:17.760969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:17.798074   57719 cri.go:89] found id: ""
	I0410 22:51:17.798099   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.798108   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:17.798117   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:17.798178   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:17.835807   57719 cri.go:89] found id: ""
	I0410 22:51:17.835839   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.835854   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:17.835863   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:17.835935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:17.876812   57719 cri.go:89] found id: ""
	I0410 22:51:17.876846   57719 logs.go:276] 0 containers: []
	W0410 22:51:17.876856   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:17.876868   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:17.876882   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:17.891121   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:17.891149   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:17.966241   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:17.966264   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:17.966277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:18.042633   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:18.042667   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:18.088294   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:18.088327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:18.518103   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.519397   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:18.650784   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:21.150770   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.403127   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:22.901329   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:20.647016   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:20.662573   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:20.662640   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:20.701147   57719 cri.go:89] found id: ""
	I0410 22:51:20.701173   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.701184   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:20.701191   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:20.701252   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:20.739005   57719 cri.go:89] found id: ""
	I0410 22:51:20.739038   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.739049   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:20.739057   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:20.739112   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:20.776335   57719 cri.go:89] found id: ""
	I0410 22:51:20.776365   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.776379   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:20.776386   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:20.776471   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:20.814755   57719 cri.go:89] found id: ""
	I0410 22:51:20.814789   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.814800   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:20.814808   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:20.814867   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:20.853872   57719 cri.go:89] found id: ""
	I0410 22:51:20.853897   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.853904   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:20.853910   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:20.853958   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:20.891616   57719 cri.go:89] found id: ""
	I0410 22:51:20.891648   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.891656   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:20.891662   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:20.891710   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:20.930285   57719 cri.go:89] found id: ""
	I0410 22:51:20.930316   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.930326   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:20.930341   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:20.930398   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:20.967857   57719 cri.go:89] found id: ""
	I0410 22:51:20.967894   57719 logs.go:276] 0 containers: []
	W0410 22:51:20.967904   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:20.967913   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:20.967934   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:21.053166   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:21.053201   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:21.098860   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:21.098888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:21.150395   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:21.150430   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:21.164707   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:21.164737   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:21.251010   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:23.751441   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:23.769949   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:23.770014   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:23.809652   57719 cri.go:89] found id: ""
	I0410 22:51:23.809678   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.809686   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:23.809692   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:23.809740   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:23.847331   57719 cri.go:89] found id: ""
	I0410 22:51:23.847364   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.847374   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:23.847383   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:23.847445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:23.889459   57719 cri.go:89] found id: ""
	I0410 22:51:23.889488   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.889498   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:23.889505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:23.889564   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:23.932683   57719 cri.go:89] found id: ""
	I0410 22:51:23.932712   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.932720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:23.932727   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:23.932787   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:23.974161   57719 cri.go:89] found id: ""
	I0410 22:51:23.974187   57719 logs.go:276] 0 containers: []
	W0410 22:51:23.974194   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:23.974200   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:23.974253   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:24.013058   57719 cri.go:89] found id: ""
	I0410 22:51:24.013087   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.013098   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:24.013106   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:24.013169   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:24.052556   57719 cri.go:89] found id: ""
	I0410 22:51:24.052582   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.052590   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:24.052596   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:24.052643   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:24.089940   57719 cri.go:89] found id: ""
	I0410 22:51:24.089967   57719 logs.go:276] 0 containers: []
	W0410 22:51:24.089974   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:24.089982   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:24.089992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:24.133198   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:24.133226   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:24.186615   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:24.186651   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:24.200559   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:24.200586   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:24.277061   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:24.277093   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:24.277109   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:23.016887   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:25.018325   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:27.018514   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:23.650669   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.149198   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:24.901704   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.902227   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.902337   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:26.855354   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:26.870269   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:26.870329   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:26.910056   57719 cri.go:89] found id: ""
	I0410 22:51:26.910084   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.910094   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:26.910101   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:26.910163   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:26.949646   57719 cri.go:89] found id: ""
	I0410 22:51:26.949674   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.949684   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:26.949690   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:26.949759   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:26.990945   57719 cri.go:89] found id: ""
	I0410 22:51:26.990970   57719 logs.go:276] 0 containers: []
	W0410 22:51:26.990977   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:26.990984   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:26.991053   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:27.029464   57719 cri.go:89] found id: ""
	I0410 22:51:27.029491   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.029500   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:27.029505   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:27.029562   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:27.072194   57719 cri.go:89] found id: ""
	I0410 22:51:27.072235   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.072260   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:27.072270   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:27.072339   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:27.106942   57719 cri.go:89] found id: ""
	I0410 22:51:27.106969   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.106979   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:27.106985   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:27.107045   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:27.144851   57719 cri.go:89] found id: ""
	I0410 22:51:27.144885   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.144894   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:27.144909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:27.144970   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:27.188138   57719 cri.go:89] found id: ""
	I0410 22:51:27.188166   57719 logs.go:276] 0 containers: []
	W0410 22:51:27.188178   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:27.188189   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:27.188204   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:27.241911   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:27.241943   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:27.255296   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:27.255322   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:27.327638   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:27.327663   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:27.327678   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:27.409048   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:27.409083   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:29.960093   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:29.975583   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:29.975647   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:30.018120   57719 cri.go:89] found id: ""
	I0410 22:51:30.018149   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.018159   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:30.018166   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:30.018225   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:30.055487   57719 cri.go:89] found id: ""
	I0410 22:51:30.055511   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.055518   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:30.055524   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:30.055573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:30.093723   57719 cri.go:89] found id: ""
	I0410 22:51:30.093749   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.093756   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:30.093761   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:30.093808   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:30.138278   57719 cri.go:89] found id: ""
	I0410 22:51:30.138306   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.138317   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:30.138324   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:30.138385   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:30.174454   57719 cri.go:89] found id: ""
	I0410 22:51:30.174484   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.174495   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:30.174502   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:30.174573   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:30.213189   57719 cri.go:89] found id: ""
	I0410 22:51:30.213214   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.213221   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:30.213227   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:30.213272   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:30.253264   57719 cri.go:89] found id: ""
	I0410 22:51:30.253294   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.253304   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:30.253309   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:30.253357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:30.289729   57719 cri.go:89] found id: ""
	I0410 22:51:30.289755   57719 logs.go:276] 0 containers: []
	W0410 22:51:30.289767   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:30.289777   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:30.289793   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:30.303387   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:30.303416   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:30.381294   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:30.381315   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:30.381331   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:29.019226   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:31.519681   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:28.150621   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.649807   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.903662   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:33.401827   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:30.468072   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:30.468110   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:30.508761   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:30.508794   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.061654   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:33.077072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:33.077146   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:33.113753   57719 cri.go:89] found id: ""
	I0410 22:51:33.113781   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.113791   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:33.113798   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:33.113848   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:33.149212   57719 cri.go:89] found id: ""
	I0410 22:51:33.149238   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.149249   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:33.149256   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:33.149321   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:33.185619   57719 cri.go:89] found id: ""
	I0410 22:51:33.185649   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.185659   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:33.185667   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:33.185725   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:33.222270   57719 cri.go:89] found id: ""
	I0410 22:51:33.222301   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.222313   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:33.222320   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:33.222375   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:33.258594   57719 cri.go:89] found id: ""
	I0410 22:51:33.258624   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.258636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:33.258642   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:33.258689   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:33.298326   57719 cri.go:89] found id: ""
	I0410 22:51:33.298360   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.298368   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:33.298374   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:33.298438   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:33.337407   57719 cri.go:89] found id: ""
	I0410 22:51:33.337438   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.337449   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:33.337456   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:33.337520   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:33.374971   57719 cri.go:89] found id: ""
	I0410 22:51:33.375003   57719 logs.go:276] 0 containers: []
	W0410 22:51:33.375014   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:33.375024   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:33.375039   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:33.415256   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:33.415288   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:33.467895   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:33.467929   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:33.484604   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:33.484639   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:33.562267   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:33.562288   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:33.562299   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:34.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.519093   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:32.650396   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.150200   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:35.902810   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:38.401463   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:36.142628   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:36.157825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:36.157883   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:36.199418   57719 cri.go:89] found id: ""
	I0410 22:51:36.199446   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.199456   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:36.199463   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:36.199523   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:36.238136   57719 cri.go:89] found id: ""
	I0410 22:51:36.238166   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.238174   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:36.238180   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:36.238229   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:36.273995   57719 cri.go:89] found id: ""
	I0410 22:51:36.274026   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.274037   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:36.274049   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:36.274110   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:36.311007   57719 cri.go:89] found id: ""
	I0410 22:51:36.311039   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.311049   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:36.311057   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:36.311122   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:36.351062   57719 cri.go:89] found id: ""
	I0410 22:51:36.351086   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.351093   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:36.351099   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:36.351152   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:36.388660   57719 cri.go:89] found id: ""
	I0410 22:51:36.388689   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.388703   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:36.388711   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:36.388762   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:36.428715   57719 cri.go:89] found id: ""
	I0410 22:51:36.428753   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.428761   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:36.428767   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:36.428831   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:36.467186   57719 cri.go:89] found id: ""
	I0410 22:51:36.467213   57719 logs.go:276] 0 containers: []
	W0410 22:51:36.467220   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:36.467228   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:36.467239   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:36.521831   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:36.521860   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:36.536929   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:36.536957   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:36.614624   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:36.614647   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:36.614659   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:36.694604   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:36.694646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.240039   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:39.255177   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:39.255262   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:39.293063   57719 cri.go:89] found id: ""
	I0410 22:51:39.293091   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.293113   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:39.293120   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:39.293181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:39.331603   57719 cri.go:89] found id: ""
	I0410 22:51:39.331631   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.331639   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:39.331645   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:39.331697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:39.372881   57719 cri.go:89] found id: ""
	I0410 22:51:39.372908   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.372919   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:39.372926   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:39.372987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:39.417399   57719 cri.go:89] found id: ""
	I0410 22:51:39.417425   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.417435   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:39.417442   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:39.417503   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:39.458836   57719 cri.go:89] found id: ""
	I0410 22:51:39.458868   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.458877   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:39.458882   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:39.458932   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:39.496436   57719 cri.go:89] found id: ""
	I0410 22:51:39.496460   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.496467   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:39.496474   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:39.496532   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:39.534649   57719 cri.go:89] found id: ""
	I0410 22:51:39.534681   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.534690   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:39.534695   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:39.534754   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:39.571677   57719 cri.go:89] found id: ""
	I0410 22:51:39.571698   57719 logs.go:276] 0 containers: []
	W0410 22:51:39.571705   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:39.571714   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:39.571725   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:39.621445   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:39.621482   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:39.676341   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:39.676382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:39.691543   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:39.691573   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:39.769452   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:39.769477   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:39.769493   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:39.017483   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:41.020027   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:37.651534   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.151404   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:40.401635   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.401931   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.401972   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.350823   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:42.367124   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:42.367199   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:42.407511   57719 cri.go:89] found id: ""
	I0410 22:51:42.407545   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.407554   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:42.407560   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:42.407622   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:42.442913   57719 cri.go:89] found id: ""
	I0410 22:51:42.442948   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.442958   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:42.442964   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:42.443027   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:42.480747   57719 cri.go:89] found id: ""
	I0410 22:51:42.480777   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.480786   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:42.480792   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:42.480846   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:42.521610   57719 cri.go:89] found id: ""
	I0410 22:51:42.521635   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.521644   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:42.521651   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:42.521698   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:42.561076   57719 cri.go:89] found id: ""
	I0410 22:51:42.561108   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.561119   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:42.561127   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:42.561189   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:42.598034   57719 cri.go:89] found id: ""
	I0410 22:51:42.598059   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.598066   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:42.598072   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:42.598129   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:42.637051   57719 cri.go:89] found id: ""
	I0410 22:51:42.637085   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.637095   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:42.637103   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:42.637162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:42.676051   57719 cri.go:89] found id: ""
	I0410 22:51:42.676084   57719 logs.go:276] 0 containers: []
	W0410 22:51:42.676094   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:42.676105   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:42.676120   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:42.719607   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:42.719634   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:42.770791   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:42.770829   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:42.785704   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:42.785730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:42.876445   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:42.876475   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:42.876490   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:43.518453   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.019450   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:42.650486   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:44.650894   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:47.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:46.901358   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:48.902417   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:45.458721   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:45.474125   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:45.474203   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:45.511105   57719 cri.go:89] found id: ""
	I0410 22:51:45.511143   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.511153   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:45.511161   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:45.511220   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:45.552891   57719 cri.go:89] found id: ""
	I0410 22:51:45.552916   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.552924   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:45.552930   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:45.552986   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:45.592423   57719 cri.go:89] found id: ""
	I0410 22:51:45.592458   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.592474   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:45.592481   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:45.592542   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:45.630964   57719 cri.go:89] found id: ""
	I0410 22:51:45.631009   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.631026   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:45.631033   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:45.631098   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:45.669557   57719 cri.go:89] found id: ""
	I0410 22:51:45.669586   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.669595   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:45.669602   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:45.669702   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:45.706359   57719 cri.go:89] found id: ""
	I0410 22:51:45.706387   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.706395   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:45.706402   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:45.706463   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:45.743301   57719 cri.go:89] found id: ""
	I0410 22:51:45.743330   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.743337   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:45.743343   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:45.743390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:45.781679   57719 cri.go:89] found id: ""
	I0410 22:51:45.781703   57719 logs.go:276] 0 containers: []
	W0410 22:51:45.781711   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:45.781718   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:45.781730   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:45.835251   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:45.835286   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:45.849255   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:45.849284   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:45.918404   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:45.918436   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:45.918452   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:45.999556   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:45.999591   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.546421   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:48.561243   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:48.561314   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:48.618335   57719 cri.go:89] found id: ""
	I0410 22:51:48.618361   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.618369   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:48.618375   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:48.618445   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:48.656116   57719 cri.go:89] found id: ""
	I0410 22:51:48.656151   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.656160   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:48.656167   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:48.656222   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:48.694846   57719 cri.go:89] found id: ""
	I0410 22:51:48.694874   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.694884   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:48.694897   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:48.694971   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:48.731988   57719 cri.go:89] found id: ""
	I0410 22:51:48.732020   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.732031   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:48.732039   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:48.732102   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:48.768595   57719 cri.go:89] found id: ""
	I0410 22:51:48.768627   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.768636   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:48.768643   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:48.768708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:48.807263   57719 cri.go:89] found id: ""
	I0410 22:51:48.807292   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.807302   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:48.807308   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:48.807366   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:48.845291   57719 cri.go:89] found id: ""
	I0410 22:51:48.845317   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.845325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:48.845329   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:48.845399   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:48.891056   57719 cri.go:89] found id: ""
	I0410 22:51:48.891081   57719 logs.go:276] 0 containers: []
	W0410 22:51:48.891091   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:48.891102   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:48.891117   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:48.931963   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:48.931992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:48.985539   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:48.985579   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:49.000685   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:49.000716   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:49.076097   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:49.076127   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:49.076143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:48.517879   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.018479   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:49.150511   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.650519   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.400971   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:53.401596   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:51.663336   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:51.678249   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:51.678315   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:51.720062   57719 cri.go:89] found id: ""
	I0410 22:51:51.720088   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.720096   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:51.720103   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:51.720164   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:51.766351   57719 cri.go:89] found id: ""
	I0410 22:51:51.766387   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.766395   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:51.766401   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:51.766448   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:51.813037   57719 cri.go:89] found id: ""
	I0410 22:51:51.813068   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.813080   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:51.813087   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:51.813150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:51.849232   57719 cri.go:89] found id: ""
	I0410 22:51:51.849262   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.849273   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:51.849280   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:51.849346   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:51.886392   57719 cri.go:89] found id: ""
	I0410 22:51:51.886415   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.886422   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:51.886428   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:51.886485   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:51.930859   57719 cri.go:89] found id: ""
	I0410 22:51:51.930896   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.930905   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:51.930913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:51.930978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:51.970403   57719 cri.go:89] found id: ""
	I0410 22:51:51.970501   57719 logs.go:276] 0 containers: []
	W0410 22:51:51.970524   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:51.970533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:51.970599   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:52.008281   57719 cri.go:89] found id: ""
	I0410 22:51:52.008311   57719 logs.go:276] 0 containers: []
	W0410 22:51:52.008322   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:52.008333   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:52.008347   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:52.060623   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:52.060656   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:52.075529   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:52.075559   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:52.158330   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:52.158356   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:52.158371   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:52.236356   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:52.236392   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:54.782448   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:54.796928   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:54.796997   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:54.836297   57719 cri.go:89] found id: ""
	I0410 22:51:54.836326   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.836335   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:54.836341   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:54.836390   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:54.873501   57719 cri.go:89] found id: ""
	I0410 22:51:54.873532   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.873540   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:54.873547   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:54.873617   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:54.914200   57719 cri.go:89] found id: ""
	I0410 22:51:54.914227   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.914238   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:54.914247   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:54.914308   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:54.958654   57719 cri.go:89] found id: ""
	I0410 22:51:54.958682   57719 logs.go:276] 0 containers: []
	W0410 22:51:54.958693   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:54.958702   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:54.958761   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:55.017032   57719 cri.go:89] found id: ""
	I0410 22:51:55.017078   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.017090   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:55.017101   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:55.017167   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:55.093024   57719 cri.go:89] found id: ""
	I0410 22:51:55.093059   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.093070   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:55.093085   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:55.093156   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:55.142412   57719 cri.go:89] found id: ""
	I0410 22:51:55.142441   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.142456   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:55.142464   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:55.142521   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:55.180116   57719 cri.go:89] found id: ""
	I0410 22:51:55.180147   57719 logs.go:276] 0 containers: []
	W0410 22:51:55.180159   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:55.180169   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:55.180186   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:55.249118   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:55.249139   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:55.249153   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:55.327558   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:55.327597   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:55.373127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:55.373163   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:53.518589   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.017080   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:54.151372   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:56.650238   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.401716   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:57.902174   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:55.431602   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:55.431647   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:57.947559   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:51:57.962916   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:51:57.962983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:51:58.000955   57719 cri.go:89] found id: ""
	I0410 22:51:58.000983   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.000990   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:51:58.000997   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:51:58.001049   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:51:58.040556   57719 cri.go:89] found id: ""
	I0410 22:51:58.040579   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.040586   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:51:58.040592   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:51:58.040649   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:51:58.079121   57719 cri.go:89] found id: ""
	I0410 22:51:58.079148   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.079155   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:51:58.079161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:51:58.079240   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:51:58.119876   57719 cri.go:89] found id: ""
	I0410 22:51:58.119902   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.119914   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:51:58.119929   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:51:58.119987   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:51:58.160130   57719 cri.go:89] found id: ""
	I0410 22:51:58.160162   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.160173   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:51:58.160181   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:51:58.160258   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:51:58.198162   57719 cri.go:89] found id: ""
	I0410 22:51:58.198195   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.198207   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:51:58.198215   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:51:58.198266   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:51:58.235049   57719 cri.go:89] found id: ""
	I0410 22:51:58.235078   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.235089   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:51:58.235096   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:51:58.235157   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:51:58.275786   57719 cri.go:89] found id: ""
	I0410 22:51:58.275825   57719 logs.go:276] 0 containers: []
	W0410 22:51:58.275845   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:51:58.275856   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:51:58.275872   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:51:58.316246   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:51:58.316277   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:51:58.371614   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:51:58.371649   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:51:58.386610   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:51:58.386646   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:51:58.465167   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:51:58.465187   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:51:58.465199   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:51:58.018362   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.517710   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:51:59.152119   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.650566   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:00.401148   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:02.401494   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.401624   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:01.049405   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:01.073251   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:01.073328   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:01.125169   57719 cri.go:89] found id: ""
	I0410 22:52:01.125201   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.125212   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:01.125220   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:01.125289   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:01.171256   57719 cri.go:89] found id: ""
	I0410 22:52:01.171289   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.171300   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:01.171308   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:01.171376   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:01.210444   57719 cri.go:89] found id: ""
	I0410 22:52:01.210478   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.210489   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:01.210503   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:01.210568   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:01.252448   57719 cri.go:89] found id: ""
	I0410 22:52:01.252473   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.252480   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:01.252486   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:01.252531   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:01.293084   57719 cri.go:89] found id: ""
	I0410 22:52:01.293117   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.293128   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:01.293136   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:01.293208   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:01.330992   57719 cri.go:89] found id: ""
	I0410 22:52:01.331019   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.331026   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:01.331032   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:01.331081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:01.369286   57719 cri.go:89] found id: ""
	I0410 22:52:01.369315   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.369325   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:01.369331   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:01.369378   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:01.409888   57719 cri.go:89] found id: ""
	I0410 22:52:01.409916   57719 logs.go:276] 0 containers: []
	W0410 22:52:01.409924   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:01.409933   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:01.409944   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:01.484535   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:01.484557   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:01.484569   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:01.565727   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:01.565778   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:01.606987   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:01.607018   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:01.659492   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:01.659529   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.174971   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:04.190302   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:04.190382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:04.230050   57719 cri.go:89] found id: ""
	I0410 22:52:04.230080   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.230090   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:04.230097   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:04.230162   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:04.269870   57719 cri.go:89] found id: ""
	I0410 22:52:04.269902   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.269908   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:04.269914   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:04.269969   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:04.310977   57719 cri.go:89] found id: ""
	I0410 22:52:04.311008   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.311019   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:04.311026   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:04.311096   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:04.349108   57719 cri.go:89] found id: ""
	I0410 22:52:04.349136   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.349147   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:04.349154   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:04.349216   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:04.389590   57719 cri.go:89] found id: ""
	I0410 22:52:04.389613   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.389625   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:04.389633   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:04.389697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:04.432962   57719 cri.go:89] found id: ""
	I0410 22:52:04.432989   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.433001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:04.433008   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:04.433070   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:04.473912   57719 cri.go:89] found id: ""
	I0410 22:52:04.473946   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.473955   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:04.473960   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:04.474029   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:04.516157   57719 cri.go:89] found id: ""
	I0410 22:52:04.516182   57719 logs.go:276] 0 containers: []
	W0410 22:52:04.516192   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:04.516203   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:04.516218   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:04.569047   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:04.569082   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:04.622639   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:04.622673   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:04.638441   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:04.638470   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:04.718203   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:04.718227   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:04.718241   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:02.518104   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.519509   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.519648   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:04.150041   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.150157   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:06.902111   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.902816   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:07.302147   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:07.315919   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:07.315984   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:07.354692   57719 cri.go:89] found id: ""
	I0410 22:52:07.354723   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.354733   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:07.354740   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:07.354803   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:07.393418   57719 cri.go:89] found id: ""
	I0410 22:52:07.393447   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.393459   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:07.393466   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:07.393525   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:07.436810   57719 cri.go:89] found id: ""
	I0410 22:52:07.436837   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.436847   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:07.436855   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:07.436920   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:07.478685   57719 cri.go:89] found id: ""
	I0410 22:52:07.478709   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.478720   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:07.478735   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:07.478792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:07.515699   57719 cri.go:89] found id: ""
	I0410 22:52:07.515727   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.515737   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:07.515744   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:07.515805   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:07.556419   57719 cri.go:89] found id: ""
	I0410 22:52:07.556443   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.556451   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:07.556457   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:07.556560   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:07.598076   57719 cri.go:89] found id: ""
	I0410 22:52:07.598106   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.598113   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:07.598119   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:07.598183   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:07.637778   57719 cri.go:89] found id: ""
	I0410 22:52:07.637814   57719 logs.go:276] 0 containers: []
	W0410 22:52:07.637826   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:07.637839   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:07.637854   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:07.693688   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:07.693728   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:07.709256   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:07.709289   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:07.778519   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:07.778544   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:07.778584   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:07.858937   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:07.858973   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.405765   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:10.422019   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:10.422083   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:09.017771   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.017883   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:08.151568   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.650989   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:11.402181   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.902520   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:10.463779   57719 cri.go:89] found id: ""
	I0410 22:52:10.463818   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.463829   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:10.463836   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:10.463923   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:10.503680   57719 cri.go:89] found id: ""
	I0410 22:52:10.503710   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.503718   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:10.503736   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:10.503804   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:10.545567   57719 cri.go:89] found id: ""
	I0410 22:52:10.545594   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.545605   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:10.545613   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:10.545671   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:10.590864   57719 cri.go:89] found id: ""
	I0410 22:52:10.590892   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.590901   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:10.590908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:10.590968   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:10.634628   57719 cri.go:89] found id: ""
	I0410 22:52:10.634659   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.634670   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:10.634677   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:10.634758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:10.681477   57719 cri.go:89] found id: ""
	I0410 22:52:10.681507   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.681526   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:10.681533   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:10.681585   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:10.725203   57719 cri.go:89] found id: ""
	I0410 22:52:10.725229   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.725328   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:10.725368   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:10.725443   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:10.764994   57719 cri.go:89] found id: ""
	I0410 22:52:10.765028   57719 logs.go:276] 0 containers: []
	W0410 22:52:10.765036   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:10.765044   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:10.765094   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:10.808981   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:10.809012   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:10.866429   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:10.866468   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:10.882512   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:10.882537   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:10.963016   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:10.963041   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:10.963053   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:13.544552   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:13.558161   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:13.558238   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:13.596945   57719 cri.go:89] found id: ""
	I0410 22:52:13.596977   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.596988   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:13.596996   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:13.597057   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:13.637920   57719 cri.go:89] found id: ""
	I0410 22:52:13.637944   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.637951   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:13.637958   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:13.638012   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:13.676777   57719 cri.go:89] found id: ""
	I0410 22:52:13.676808   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.676819   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:13.676826   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:13.676887   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:13.714054   57719 cri.go:89] found id: ""
	I0410 22:52:13.714078   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.714086   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:13.714091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:13.714142   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:13.757162   57719 cri.go:89] found id: ""
	I0410 22:52:13.757194   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.757206   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:13.757214   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:13.757276   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:13.793578   57719 cri.go:89] found id: ""
	I0410 22:52:13.793616   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.793629   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:13.793636   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:13.793697   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:13.831307   57719 cri.go:89] found id: ""
	I0410 22:52:13.831336   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.831346   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:13.831353   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:13.831400   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:13.872072   57719 cri.go:89] found id: ""
	I0410 22:52:13.872109   57719 logs.go:276] 0 containers: []
	W0410 22:52:13.872117   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:13.872127   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:13.872143   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:13.926909   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:13.926947   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:13.943095   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:13.943126   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:14.015301   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:14.015336   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:14.015351   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:14.101100   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:14.101137   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:13.019599   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.517932   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:13.150248   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:15.650269   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.401396   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.402384   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:16.650213   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:16.664603   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:16.664677   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:16.701498   57719 cri.go:89] found id: ""
	I0410 22:52:16.701527   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.701539   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:16.701547   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:16.701618   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:16.740687   57719 cri.go:89] found id: ""
	I0410 22:52:16.740716   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.740725   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:16.740730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:16.740789   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:16.777349   57719 cri.go:89] found id: ""
	I0410 22:52:16.777372   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.777380   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:16.777385   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:16.777454   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:16.819855   57719 cri.go:89] found id: ""
	I0410 22:52:16.819890   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.819900   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:16.819909   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:16.819973   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:16.859939   57719 cri.go:89] found id: ""
	I0410 22:52:16.859970   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.859981   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:16.859991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:16.860056   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:16.897861   57719 cri.go:89] found id: ""
	I0410 22:52:16.897886   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.897893   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:16.897899   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:16.897962   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:16.935642   57719 cri.go:89] found id: ""
	I0410 22:52:16.935673   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.935681   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:16.935687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:16.935733   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:16.974268   57719 cri.go:89] found id: ""
	I0410 22:52:16.974294   57719 logs.go:276] 0 containers: []
	W0410 22:52:16.974302   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:16.974311   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:16.974327   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:17.027850   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:17.027888   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:17.043343   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:17.043379   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:17.120945   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:17.120967   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:17.120979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.204831   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:17.204868   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:19.749712   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:19.764102   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:19.764181   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:19.800759   57719 cri.go:89] found id: ""
	I0410 22:52:19.800787   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.800795   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:19.800801   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:19.800851   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:19.839678   57719 cri.go:89] found id: ""
	I0410 22:52:19.839711   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.839723   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:19.839730   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:19.839791   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:19.876983   57719 cri.go:89] found id: ""
	I0410 22:52:19.877007   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.877015   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:19.877020   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:19.877081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:19.918139   57719 cri.go:89] found id: ""
	I0410 22:52:19.918167   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.918177   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:19.918186   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:19.918243   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:19.954770   57719 cri.go:89] found id: ""
	I0410 22:52:19.954808   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.954818   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:19.954825   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:19.954881   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:19.993643   57719 cri.go:89] found id: ""
	I0410 22:52:19.993670   57719 logs.go:276] 0 containers: []
	W0410 22:52:19.993680   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:19.993687   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:19.993746   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:20.030466   57719 cri.go:89] found id: ""
	I0410 22:52:20.030494   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.030503   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:20.030510   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:20.030575   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:20.069264   57719 cri.go:89] found id: ""
	I0410 22:52:20.069291   57719 logs.go:276] 0 containers: []
	W0410 22:52:20.069299   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:20.069307   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:20.069318   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:20.117354   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:20.117382   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:20.170758   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:20.170800   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:20.187014   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:20.187055   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:20.269620   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:20.269645   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:20.269661   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:17.518440   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.018602   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:18.151102   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.151664   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:20.901836   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:23.401655   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.844841   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:22.861923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:22.861983   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:22.907972   57719 cri.go:89] found id: ""
	I0410 22:52:22.908000   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.908010   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:22.908017   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:22.908081   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:22.949822   57719 cri.go:89] found id: ""
	I0410 22:52:22.949851   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.949861   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:22.949869   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:22.949935   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:22.989872   57719 cri.go:89] found id: ""
	I0410 22:52:22.989895   57719 logs.go:276] 0 containers: []
	W0410 22:52:22.989902   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:22.989908   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:22.989959   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:23.031881   57719 cri.go:89] found id: ""
	I0410 22:52:23.031900   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.031908   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:23.031913   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:23.031978   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:23.071691   57719 cri.go:89] found id: ""
	I0410 22:52:23.071719   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.071726   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:23.071732   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:23.071792   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:23.109961   57719 cri.go:89] found id: ""
	I0410 22:52:23.109990   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.110001   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:23.110009   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:23.110069   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:23.152955   57719 cri.go:89] found id: ""
	I0410 22:52:23.152979   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.152986   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:23.152991   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:23.153054   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:23.191883   57719 cri.go:89] found id: ""
	I0410 22:52:23.191924   57719 logs.go:276] 0 containers: []
	W0410 22:52:23.191935   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:23.191947   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:23.191959   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:23.232692   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:23.232731   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:23.283648   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:23.283684   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:23.297701   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:23.297729   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:23.381657   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:23.381673   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:23.381685   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:22.520899   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.016955   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.018541   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:22.650053   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.150370   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.402084   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.402670   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:25.961531   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:25.977539   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:25.977639   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:26.021844   57719 cri.go:89] found id: ""
	I0410 22:52:26.021875   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.021886   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:26.021893   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:26.021954   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:26.064286   57719 cri.go:89] found id: ""
	I0410 22:52:26.064316   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.064327   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:26.064335   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:26.064394   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:26.104381   57719 cri.go:89] found id: ""
	I0410 22:52:26.104426   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.104437   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:26.104445   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:26.104522   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:26.143382   57719 cri.go:89] found id: ""
	I0410 22:52:26.143407   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.143417   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:26.143424   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:26.143489   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:26.179609   57719 cri.go:89] found id: ""
	I0410 22:52:26.179635   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.179646   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:26.179652   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:26.179714   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:26.217660   57719 cri.go:89] found id: ""
	I0410 22:52:26.217689   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.217695   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:26.217701   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:26.217758   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:26.254914   57719 cri.go:89] found id: ""
	I0410 22:52:26.254946   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.254956   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:26.254963   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:26.255047   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:26.293738   57719 cri.go:89] found id: ""
	I0410 22:52:26.293769   57719 logs.go:276] 0 containers: []
	W0410 22:52:26.293779   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:26.293790   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:26.293809   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:26.366700   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:26.366725   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:26.366741   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:26.445143   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:26.445183   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:26.493175   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:26.493203   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:26.554952   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:26.554992   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.072225   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:29.087075   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:29.087150   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:29.131314   57719 cri.go:89] found id: ""
	I0410 22:52:29.131345   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.131357   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:29.131365   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:29.131427   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:29.169263   57719 cri.go:89] found id: ""
	I0410 22:52:29.169289   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.169298   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:29.169304   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:29.169357   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:29.209535   57719 cri.go:89] found id: ""
	I0410 22:52:29.209559   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.209570   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:29.209575   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:29.209630   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:29.251172   57719 cri.go:89] found id: ""
	I0410 22:52:29.251225   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.251233   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:29.251238   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:29.251290   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:29.296142   57719 cri.go:89] found id: ""
	I0410 22:52:29.296169   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.296179   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:29.296185   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:29.296245   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:29.336910   57719 cri.go:89] found id: ""
	I0410 22:52:29.336933   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.336940   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:29.336946   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:29.337003   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:29.396332   57719 cri.go:89] found id: ""
	I0410 22:52:29.396371   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.396382   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:29.396390   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:29.396475   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:29.438301   57719 cri.go:89] found id: ""
	I0410 22:52:29.438332   57719 logs.go:276] 0 containers: []
	W0410 22:52:29.438340   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:29.438348   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:29.438360   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:29.482687   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:29.482711   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:29.535115   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:29.535146   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:29.551736   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:29.551760   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:29.624162   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:29.624198   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:29.624213   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
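
The repeated "listing CRI containers in root", "found id: \"\"" and "0 containers" lines above come from minikube shelling out to crictl once per expected control-plane component and finding nothing running. A minimal, hypothetical Go sketch of that lookup follows; it is not minikube's actual implementation (which runs the command over ssh_runner), it simply runs the same "sudo crictl ps -a --quiet --name=<component>" locally and reports when no container matches:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
    // the container IDs it prints, one per line (empty when nothing matches).
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
        for _, name := range components {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", name, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
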
	I0410 22:52:29.517873   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.519737   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:27.650947   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.651296   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.150101   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:29.901370   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:31.902050   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.401849   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:32.204355   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:32.218239   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:32.218310   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:32.255412   57719 cri.go:89] found id: ""
	I0410 22:52:32.255440   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.255451   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:32.255458   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:32.255516   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:32.293553   57719 cri.go:89] found id: ""
	I0410 22:52:32.293580   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.293591   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:32.293604   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:32.293663   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:32.332814   57719 cri.go:89] found id: ""
	I0410 22:52:32.332846   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.332855   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:32.332862   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:32.332924   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:32.371312   57719 cri.go:89] found id: ""
	I0410 22:52:32.371347   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.371368   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:32.371376   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:32.371441   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:32.407630   57719 cri.go:89] found id: ""
	I0410 22:52:32.407652   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.407659   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:32.407664   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:32.407720   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:32.444878   57719 cri.go:89] found id: ""
	I0410 22:52:32.444904   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.444914   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:32.444923   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:32.444989   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:32.490540   57719 cri.go:89] found id: ""
	I0410 22:52:32.490567   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.490578   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:32.490586   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:32.490644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:32.528911   57719 cri.go:89] found id: ""
	I0410 22:52:32.528953   57719 logs.go:276] 0 containers: []
	W0410 22:52:32.528961   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:32.528969   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:32.528979   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:32.608601   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:32.608626   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:32.608641   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:32.684840   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:32.684876   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:32.728092   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:32.728132   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:32.778491   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:32.778524   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.296228   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:35.310615   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:52:35.310705   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:52:35.377585   57719 cri.go:89] found id: ""
	I0410 22:52:35.377612   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.377623   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:52:35.377632   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:52:35.377692   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:52:35.417734   57719 cri.go:89] found id: ""
	I0410 22:52:35.417775   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.417796   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:52:35.417803   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:52:35.417864   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:52:34.017119   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.017526   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:34.150859   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.151112   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:36.402036   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.402201   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:35.456256   57719 cri.go:89] found id: ""
	I0410 22:52:35.456281   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.456291   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:52:35.456298   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:52:35.456382   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:52:35.495233   57719 cri.go:89] found id: ""
	I0410 22:52:35.495257   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.495267   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:52:35.495274   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:52:35.495333   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:52:35.535239   57719 cri.go:89] found id: ""
	I0410 22:52:35.535273   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.535284   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:52:35.535292   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:52:35.535352   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:52:35.571601   57719 cri.go:89] found id: ""
	I0410 22:52:35.571628   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.571638   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:52:35.571645   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:52:35.571708   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:52:35.612008   57719 cri.go:89] found id: ""
	I0410 22:52:35.612036   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.612045   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:52:35.612051   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:52:35.612099   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:52:35.649029   57719 cri.go:89] found id: ""
	I0410 22:52:35.649057   57719 logs.go:276] 0 containers: []
	W0410 22:52:35.649065   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:52:35.649073   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:52:35.649084   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:52:35.702630   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:52:35.702668   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:52:35.718404   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:52:35.718433   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:52:35.798380   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:52:35.798405   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:52:35.798420   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:52:35.874049   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:52:35.874085   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:52:38.416265   57719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:52:38.430921   57719 kubeadm.go:591] duration metric: took 4m3.090666464s to restartPrimaryControlPlane
	W0410 22:52:38.431006   57719 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:52:38.431030   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:52:41.138973   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.707913754s)
	I0410 22:52:41.139063   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:52:41.155646   57719 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:52:41.166345   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:52:41.176443   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:52:41.176481   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:52:41.176547   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:52:41.186887   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:52:41.186960   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:52:41.199740   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:52:41.209843   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:52:41.209901   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:52:41.219804   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.229739   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:52:41.229807   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:52:41.240127   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:52:41.249763   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:52:41.249824   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
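
The ls/grep/rm sequence just above is the stale-config cleanup that runs immediately before kubeadm init is re-run below: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it from /var/tmp/minikube/kubeadm.yaml. A rough illustrative Go sketch of the same check, assuming local file access rather than the ssh_runner used in the trace:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or not pointing at the expected endpoint: delete it so the
                // subsequent `kubeadm init` regenerates it from kubeadm.yaml.
                _ = os.Remove(f)
                fmt.Println("removed stale", f)
                continue
            }
            fmt.Println("kept", f)
        }
    }

In the run above all four files were already missing (the ls check exited with status 2), so every grep failed and the rm -f calls were effectively no-ops.
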
	I0410 22:52:41.260148   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:52:41.334127   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:52:41.334200   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:52:41.506104   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:52:41.506307   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:52:41.506488   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:52:41.715227   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:52:38.519180   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.018674   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:38.649983   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.152610   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:41.717460   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:52:41.717564   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:52:41.717654   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:52:41.717781   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:52:41.717898   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:52:41.718004   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:52:41.718099   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:52:41.718203   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:52:41.718550   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:52:41.719083   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:52:41.719413   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:52:41.719571   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:52:41.719675   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:52:41.998202   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:52:42.109508   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:52:42.315545   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:52:42.448910   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:52:42.465903   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:52:42.467312   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:52:42.467387   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:52:42.636790   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:52:40.402237   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.404435   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:42.638969   57719 out.go:204]   - Booting up control plane ...
	I0410 22:52:42.639106   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:52:42.652152   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:52:42.653843   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:52:42.654719   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:52:42.658006   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:52:43.518416   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.017894   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:43.650778   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.149976   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:44.902059   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:46.902549   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:49.401695   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.517833   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.018924   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:48.150825   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:50.151391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:51.901096   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.902619   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:53.518616   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.519254   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:52.649783   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:54.651766   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:56.655687   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:55.903916   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.400789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:58.017685   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.517303   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:52:59.152346   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:01.651146   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:00.901531   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.400690   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:02.517569   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:04.517775   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.017655   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:03.651728   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.652505   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:05.901605   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:07.902363   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:09.018576   58186 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:11.510820   58186 pod_ready.go:81] duration metric: took 4m0.000124062s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:11.510861   58186 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4r9pl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:53:11.510885   58186 pod_ready.go:38] duration metric: took 4m10.548289153s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:11.510918   58186 kubeadm.go:591] duration metric: took 4m18.480793797s to restartPrimaryControlPlane
	W0410 22:53:11.510993   58186 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:53:11.511019   58186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
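
The pod_ready.go lines interleaved through this trace are a polling wait on each pod's Ready condition, which for metrics-server-57f55c9bc5-4r9pl gave up after its 4m0s budget just above. Assuming a standard client-go setup (the kubeconfig path, namespace and pod name are copied from the log; the rest is purely illustrative, not minikube's code), a comparable wait loop might look like:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Same overall budget as the wait that timed out in the trace.
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()

        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-4r9pl", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for pod to be Ready")
                return
            case <-time.After(2 * time.Second):
            }
        }
    }
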
	I0410 22:53:08.151155   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.151358   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:10.400722   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.401658   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.401745   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:12.652391   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:14.652682   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:17.149892   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:16.900482   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:18.900789   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:19.152154   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:21.649975   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:20.902068   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:23.401500   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:22.660165   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:53:22.660260   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:22.660520   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
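
The [kubelet-check] messages above are kubeadm's own health probe: it repeatedly curls http://localhost:10248/healthz and keeps reporting the connection-refused error until the kubelet answers. A small Go stand-in for that probe, assuming the same endpoint and the 40s initial timeout mentioned in the log (an approximation, not kubeadm's code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        deadline := time.Now().Add(40 * time.Second) // the initial [kubelet-check] timeout
        for time.Now().Before(deadline) {
            resp, err := client.Get("http://localhost:10248/healthz")
            if err == nil {
                healthy := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if healthy {
                    fmt.Println("kubelet is healthy")
                    return
                }
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("kubelet isn't running or healthy")
    }
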
	I0410 22:53:23.653457   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:26.149469   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:25.903070   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:28.400947   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:27.660705   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:27.660919   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:28.150895   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.650254   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:30.401054   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.401994   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:32.654427   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:35.149580   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150506   58701 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.150533   58701 pod_ready.go:81] duration metric: took 4m0.00757056s for pod "metrics-server-57f55c9bc5-9l2hc" in "kube-system" namespace to be "Ready" ...
	E0410 22:53:37.150544   58701 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0410 22:53:37.150552   58701 pod_ready.go:38] duration metric: took 4m5.55870495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:53:37.150570   58701 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:53:37.150602   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:37.150659   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:37.213472   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.213499   58701 cri.go:89] found id: ""
	I0410 22:53:37.213511   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:37.213561   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.218928   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:37.218997   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:37.260045   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.260066   58701 cri.go:89] found id: ""
	I0410 22:53:37.260073   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:37.260116   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.265329   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:37.265393   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:37.306649   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:37.306674   58701 cri.go:89] found id: ""
	I0410 22:53:37.306682   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:37.306729   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.311163   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:37.311213   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:37.351855   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:37.351883   58701 cri.go:89] found id: ""
	I0410 22:53:37.351890   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:37.351937   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.356427   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:37.356497   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:34.900998   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:36.901173   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:39.400680   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:37.661409   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:37.661698   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:37.399224   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:37.399248   58701 cri.go:89] found id: ""
	I0410 22:53:37.399257   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:37.399315   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.404314   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:37.404380   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:37.444169   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.444196   58701 cri.go:89] found id: ""
	I0410 22:53:37.444205   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:37.444264   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.448618   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:37.448693   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:37.487481   58701 cri.go:89] found id: ""
	I0410 22:53:37.487507   58701 logs.go:276] 0 containers: []
	W0410 22:53:37.487514   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:37.487519   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:37.487566   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:37.531000   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:37.531018   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.531022   58701 cri.go:89] found id: ""
	I0410 22:53:37.531029   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:37.531081   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.535679   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:37.539974   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:37.539998   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:37.601043   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:37.601086   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:37.616427   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:37.616458   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:37.669951   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:37.669983   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:37.716243   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:37.716273   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:37.774644   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:37.774678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:37.821033   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:37.821077   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:37.883644   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:37.883678   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:38.019289   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:38.019320   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:38.057708   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:38.057739   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:38.100119   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:38.100149   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:38.143845   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:38.143875   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:38.186718   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:38.186749   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
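
Once a component container is found, the "Gathering logs for ..." steps above pull its recent output with "sudo /usr/bin/crictl logs --tail 400 <id>". A minimal local sketch of that call (the container ID below is copied from the trace purely as a placeholder argument):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs fetches the last n lines of a container's logs via crictl,
    // mirroring the `crictl logs --tail 400 <id>` invocations in the trace.
    func tailContainerLogs(id string, n int) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := tailContainerLogs("74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c", 400)
        if err != nil {
            fmt.Println("crictl logs failed:", err)
            return
        }
        fmt.Print(logs)
    }
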
	I0410 22:53:41.168951   58701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:53:41.186828   58701 api_server.go:72] duration metric: took 4m17.343179611s to wait for apiserver process to appear ...
	I0410 22:53:41.186866   58701 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:53:41.186911   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:41.186972   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:41.228167   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.228194   58701 cri.go:89] found id: ""
	I0410 22:53:41.228201   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:41.228251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.232754   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:41.232812   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:41.271497   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.271519   58701 cri.go:89] found id: ""
	I0410 22:53:41.271527   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:41.271575   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.276165   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:41.276234   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:41.319164   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.319187   58701 cri.go:89] found id: ""
	I0410 22:53:41.319195   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:41.319251   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.323627   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:41.323696   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:41.366648   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:41.366671   58701 cri.go:89] found id: ""
	I0410 22:53:41.366678   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:41.366733   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.371132   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:41.371197   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:41.412956   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.412974   58701 cri.go:89] found id: ""
	I0410 22:53:41.412982   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:41.413034   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.417441   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:41.417495   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:41.460008   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.460037   58701 cri.go:89] found id: ""
	I0410 22:53:41.460048   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:41.460105   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.464422   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:41.464492   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:41.504095   58701 cri.go:89] found id: ""
	I0410 22:53:41.504126   58701 logs.go:276] 0 containers: []
	W0410 22:53:41.504134   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:41.504140   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:41.504199   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:41.543443   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.543467   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.543473   58701 cri.go:89] found id: ""
	I0410 22:53:41.543481   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:41.543540   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.548182   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:41.552917   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:41.552941   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:41.601620   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:41.601652   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:41.653090   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:41.653124   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:41.692683   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:41.692711   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:41.736312   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:41.736353   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:53:41.753242   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:41.753283   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:41.812881   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:41.812910   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:41.860686   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:41.860714   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:41.902523   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:41.902546   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:41.945812   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:41.945848   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:42.001012   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:42.001046   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:42.123971   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:42.124000   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:42.168773   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:42.168806   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:41.405604   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.901172   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:43.595677   58186 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.084634816s)
	I0410 22:53:43.595765   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:43.613470   58186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:53:43.624876   58186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:53:43.638564   58186 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:53:43.638592   58186 kubeadm.go:156] found existing configuration files:
	
	I0410 22:53:43.638641   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:53:43.652554   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:53:43.652608   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:53:43.664263   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:53:43.674443   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:53:43.674497   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:53:43.695444   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.705446   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:53:43.705518   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:53:43.716451   58186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:53:43.726343   58186 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:53:43.726407   58186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
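The cleanup above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes any file that does not reference it, so kubeadm can regenerate a consistent set. A minimal stand-alone sketch of that check (assuming direct local file access instead of the ssh_runner, and hard-coding the endpoint seen in the log):

// Sketch only, not minikube's implementation: keep a kubeconfig only if it
// already points at https://control-plane.minikube.internal:8443, otherwise
// remove it so `kubeadm init` recreates it.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up, kubeadm will create it.
			fmt.Printf("%s: not present, skipping\n", f)
			continue
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			// Stale config from a previous cluster: remove it.
			fmt.Printf("%s: does not reference %s, removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}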
	I0410 22:53:43.736859   58186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:53:43.957994   58186 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:53:45.115742   58701 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8444/healthz ...
	I0410 22:53:45.120239   58701 api_server.go:279] https://192.168.72.170:8444/healthz returned 200:
	ok
	I0410 22:53:45.121662   58701 api_server.go:141] control plane version: v1.29.3
	I0410 22:53:45.121690   58701 api_server.go:131] duration metric: took 3.934815447s to wait for apiserver health ...
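The healthz wait above probes the apiserver over HTTPS until /healthz returns 200 with body "ok". A minimal sketch of that probe, using the address from the log; the real check authenticates with the cluster's client certificates, so skipping TLS verification here is purely an assumption to keep the sketch self-contained:

// Sketch of the apiserver health probe; not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: real clients verify the apiserver certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.72.170:8444/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}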
	I0410 22:53:45.121699   58701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:53:45.121727   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:53:45.121780   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:53:45.172291   58701 cri.go:89] found id: "74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.172315   58701 cri.go:89] found id: ""
	I0410 22:53:45.172324   58701 logs.go:276] 1 containers: [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c]
	I0410 22:53:45.172382   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.177041   58701 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:53:45.177103   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:53:45.213853   58701 cri.go:89] found id: "34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:45.213880   58701 cri.go:89] found id: ""
	I0410 22:53:45.213889   58701 logs.go:276] 1 containers: [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072]
	I0410 22:53:45.213944   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.218478   58701 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:53:45.218546   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:53:45.268753   58701 cri.go:89] found id: "d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.268779   58701 cri.go:89] found id: ""
	I0410 22:53:45.268792   58701 logs.go:276] 1 containers: [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3]
	I0410 22:53:45.268843   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.273223   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:53:45.273291   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:53:45.314032   58701 cri.go:89] found id: "b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:45.314057   58701 cri.go:89] found id: ""
	I0410 22:53:45.314066   58701 logs.go:276] 1 containers: [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14]
	I0410 22:53:45.314115   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.318671   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:53:45.318740   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:53:45.356139   58701 cri.go:89] found id: "7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.356167   58701 cri.go:89] found id: ""
	I0410 22:53:45.356177   58701 logs.go:276] 1 containers: [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b]
	I0410 22:53:45.356234   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.361449   58701 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:53:45.361520   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:53:45.405153   58701 cri.go:89] found id: "c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.405174   58701 cri.go:89] found id: ""
	I0410 22:53:45.405181   58701 logs.go:276] 1 containers: [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39]
	I0410 22:53:45.405230   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.409795   58701 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:53:45.409871   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:53:45.451984   58701 cri.go:89] found id: ""
	I0410 22:53:45.452016   58701 logs.go:276] 0 containers: []
	W0410 22:53:45.452026   58701 logs.go:278] No container was found matching "kindnet"
	I0410 22:53:45.452034   58701 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0410 22:53:45.452095   58701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0410 22:53:45.491612   58701 cri.go:89] found id: "3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.491650   58701 cri.go:89] found id: "912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.491656   58701 cri.go:89] found id: ""
	I0410 22:53:45.491665   58701 logs.go:276] 2 containers: [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7]
	I0410 22:53:45.491724   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.496253   58701 ssh_runner.go:195] Run: which crictl
	I0410 22:53:45.500723   58701 logs.go:123] Gathering logs for kube-proxy [7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b] ...
	I0410 22:53:45.500751   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c920ae26b3ccfa6c4e689e104f7293f8919564f56c0e800d9bd405c9f2da90b"
	I0410 22:53:45.557083   58701 logs.go:123] Gathering logs for kube-controller-manager [c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39] ...
	I0410 22:53:45.557118   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b5f1abd23217653abc2327471cd8a601985f6eac2835e61cb655f9efaf9f39"
	I0410 22:53:45.616768   58701 logs.go:123] Gathering logs for storage-provisioner [3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195] ...
	I0410 22:53:45.616804   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e97b78e0d5a11ff687a960411c75f0c717077412c87e21ac7e7670d974a6195"
	I0410 22:53:45.664097   58701 logs.go:123] Gathering logs for storage-provisioner [912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7] ...
	I0410 22:53:45.664133   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912eddb6d12e874548fa50f1d7c48002b4bbd65113cb32219ba7819cf4f7e1b7"
	I0410 22:53:45.707920   58701 logs.go:123] Gathering logs for container status ...
	I0410 22:53:45.707957   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0410 22:53:45.751862   58701 logs.go:123] Gathering logs for kube-apiserver [74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c] ...
	I0410 22:53:45.751898   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74618e834b6295a172de1af7916f7ddc361d5b80e394ecb8b4ae171e87cea39c"
	I0410 22:53:45.806584   58701 logs.go:123] Gathering logs for coredns [d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3] ...
	I0410 22:53:45.806619   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0547fcd346555824c446a2fa52ddb09ebbf2279f6f6e66e317b8df617f244b3"
	I0410 22:53:45.846145   58701 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:53:45.846170   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0410 22:53:45.970766   58701 logs.go:123] Gathering logs for etcd [34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072] ...
	I0410 22:53:45.970796   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b1b1f972a8edbd7d863e417807a474333ae48ec531bb78744c20d760e73072"
	I0410 22:53:46.024049   58701 logs.go:123] Gathering logs for kube-scheduler [b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14] ...
	I0410 22:53:46.024081   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d427d7dee4f1a686d43a5ecf6198ad6de1f62d5ab411c39f61f29373341b14"
	I0410 22:53:46.067009   58701 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:53:46.067048   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:53:46.462765   58701 logs.go:123] Gathering logs for kubelet ...
	I0410 22:53:46.462812   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:53:46.520007   58701 logs.go:123] Gathering logs for dmesg ...
	I0410 22:53:46.520049   58701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
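The log-gathering pass above resolves each control-plane component to its container IDs with `crictl ps -a --quiet --name=<component>` and then tails each container's logs with `crictl logs --tail 400`. A rough stand-alone sketch of that loop, assuming crictl is on PATH and sudo access to the CRI-O socket:

// Sketch of the per-component log gathering; not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl ps failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Same tail length the runner uses above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s (%s) ===\n%s\n", name, id, logs)
		}
	}
}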
	I0410 22:53:49.047137   58701 system_pods.go:59] 8 kube-system pods found
	I0410 22:53:49.047166   58701 system_pods.go:61] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.047170   58701 system_pods.go:61] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.047174   58701 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.047177   58701 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.047180   58701 system_pods.go:61] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.047183   58701 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.047189   58701 system_pods.go:61] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.047192   58701 system_pods.go:61] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.047201   58701 system_pods.go:74] duration metric: took 3.925495812s to wait for pod list to return data ...
	I0410 22:53:49.047208   58701 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:53:49.050341   58701 default_sa.go:45] found service account: "default"
	I0410 22:53:49.050363   58701 default_sa.go:55] duration metric: took 3.148222ms for default service account to be created ...
	I0410 22:53:49.050371   58701 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:53:49.056364   58701 system_pods.go:86] 8 kube-system pods found
	I0410 22:53:49.056390   58701 system_pods.go:89] "coredns-76f75df574-ghnvx" [88ebd9b0-ecf0-4037-b5b0-547dad2354ba] Running
	I0410 22:53:49.056414   58701 system_pods.go:89] "etcd-default-k8s-diff-port-519831" [e03c3075-b377-41a8-8f68-5a424fafd6a1] Running
	I0410 22:53:49.056423   58701 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-519831" [a538137b-08ce-4feb-a420-2e3ad7125b14] Running
	I0410 22:53:49.056431   58701 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-519831" [05d154e2-69cf-40dc-a4ba-ed0d65be4365] Running
	I0410 22:53:49.056437   58701 system_pods.go:89] "kube-proxy-5mbwx" [44724487-9539-4079-9fd6-40cb70208b95] Running
	I0410 22:53:49.056444   58701 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-519831" [a587ef20-140c-40b0-b306-d5f5c595f4a6] Running
	I0410 22:53:49.056455   58701 system_pods.go:89] "metrics-server-57f55c9bc5-9l2hc" [2f5cda2f-4d8f-4798-954e-5ef588f2b88f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:53:49.056462   58701 system_pods.go:89] "storage-provisioner" [e4e09f42-54ba-480e-a020-1ca071a54558] Running
	I0410 22:53:49.056475   58701 system_pods.go:126] duration metric: took 6.097239ms to wait for k8s-apps to be running ...
	I0410 22:53:49.056492   58701 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:53:49.056537   58701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:53:49.077239   58701 system_svc.go:56] duration metric: took 20.737127ms WaitForService to wait for kubelet
	I0410 22:53:49.077269   58701 kubeadm.go:576] duration metric: took 4m25.233626302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:53:49.077297   58701 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:53:49.080463   58701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:53:49.080486   58701 node_conditions.go:123] node cpu capacity is 2
	I0410 22:53:49.080497   58701 node_conditions.go:105] duration metric: took 3.195662ms to run NodePressure ...
	I0410 22:53:49.080508   58701 start.go:240] waiting for startup goroutines ...
	I0410 22:53:49.080515   58701 start.go:245] waiting for cluster config update ...
	I0410 22:53:49.080525   58701 start.go:254] writing updated cluster config ...
	I0410 22:53:49.080805   58701 ssh_runner.go:195] Run: rm -f paused
	I0410 22:53:49.141489   58701 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:53:49.143597   58701 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-519831" cluster and "default" namespace by default
	I0410 22:53:45.903632   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:48.403981   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.064071   58186 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0410 22:53:53.064154   58186 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:53:53.064260   58186 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:53:53.064429   58186 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:53:53.064574   58186 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:53:53.064670   58186 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:53:53.066595   58186 out.go:204]   - Generating certificates and keys ...
	I0410 22:53:53.066703   58186 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:53:53.066808   58186 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:53:53.066929   58186 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:53:53.067023   58186 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:53:53.067155   58186 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:53:53.067235   58186 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:53:53.067329   58186 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:53:53.067433   58186 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:53:53.067546   58186 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:53:53.067655   58186 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:53:53.067733   58186 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:53:53.067890   58186 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:53:53.067961   58186 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:53:53.068049   58186 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:53:53.068132   58186 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:53:53.068232   58186 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:53:53.068310   58186 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:53:53.068379   58186 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:53:53.068510   58186 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:53:53.070126   58186 out.go:204]   - Booting up control plane ...
	I0410 22:53:53.070219   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:53:53.070324   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:53:53.070425   58186 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:53:53.070565   58186 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:53:53.070686   58186 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:53:53.070748   58186 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:53:53.070973   58186 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:53:53.071083   58186 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002820 seconds
	I0410 22:53:53.071249   58186 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:53:53.071424   58186 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:53:53.071485   58186 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:53:53.071624   58186 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-706500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:53:53.071680   58186 kubeadm.go:309] [bootstrap-token] Using token: 0wvld6.jntz9ft9bn5g46le
	I0410 22:53:53.073567   58186 out.go:204]   - Configuring RBAC rules ...
	I0410 22:53:53.073708   58186 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:53:53.073819   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:53:53.074015   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:53:53.074206   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:53:53.074370   58186 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:53:53.074548   58186 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:53:53.074726   58186 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:53:53.074798   58186 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:53:53.074873   58186 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:53:53.074884   58186 kubeadm.go:309] 
	I0410 22:53:53.074956   58186 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:53:53.074978   58186 kubeadm.go:309] 
	I0410 22:53:53.075077   58186 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:53:53.075088   58186 kubeadm.go:309] 
	I0410 22:53:53.075119   58186 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:53:53.075191   58186 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:53:53.075262   58186 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:53:53.075273   58186 kubeadm.go:309] 
	I0410 22:53:53.075337   58186 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:53:53.075353   58186 kubeadm.go:309] 
	I0410 22:53:53.075419   58186 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:53:53.075437   58186 kubeadm.go:309] 
	I0410 22:53:53.075503   58186 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:53:53.075621   58186 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:53:53.075714   58186 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:53:53.075724   58186 kubeadm.go:309] 
	I0410 22:53:53.075829   58186 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:53:53.075936   58186 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:53:53.075953   58186 kubeadm.go:309] 
	I0410 22:53:53.076058   58186 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076196   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:53:53.076253   58186 kubeadm.go:309] 	--control-plane 
	I0410 22:53:53.076270   58186 kubeadm.go:309] 
	I0410 22:53:53.076387   58186 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:53:53.076422   58186 kubeadm.go:309] 
	I0410 22:53:53.076516   58186 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0wvld6.jntz9ft9bn5g46le \
	I0410 22:53:53.076661   58186 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:53:53.076711   58186 cni.go:84] Creating CNI manager for ""
	I0410 22:53:53.076726   58186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:53:53.078503   58186 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:53:50.902397   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.403449   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:53.079631   58186 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:53:53.132043   58186 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
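The step above copies a ~496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the log does not show its contents. The sketch below writes a generic bridge CNI config of the kind CRI-O consumes; the subnet, plugin list, and field values are illustrative assumptions, not the file minikube actually wrote:

// Illustrative bridge CNI conflist writer; content is assumed, not from the log.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Writing to the CNI config directory normally requires root.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}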
	I0410 22:53:53.167760   58186 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:53:53.167847   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:53.167870   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-706500 minikube.k8s.io/updated_at=2024_04_10T22_53_53_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=embed-certs-706500 minikube.k8s.io/primary=true
	I0410 22:53:53.511359   58186 ops.go:34] apiserver oom_adj: -16
	I0410 22:53:53.511506   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.012080   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:54.511816   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.011883   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.511809   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.011572   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:56.512114   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:57.011878   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:55.900548   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.901541   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:53:57.662444   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:53:57.662687   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:53:57.511726   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.011563   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:58.512617   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.012145   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:53:59.512448   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.012278   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.512290   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.012507   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:01.512415   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:02.011660   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:00.401622   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.902558   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:02.511581   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.012326   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:03.512539   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.012085   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:04.512496   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.011911   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.512180   58186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:05.619801   58186 kubeadm.go:1107] duration metric: took 12.452015223s to wait for elevateKubeSystemPrivileges
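The repeated `kubectl get sa default` runs above are a poll: after kubeadm init, the runner retries roughly every 500ms until the default service account exists, which is what the 12.45s elevateKubeSystemPrivileges metric measures. A simplified stand-alone version of that poll, reusing the binary and kubeconfig paths from the log (the retry cap is an assumption):

// Sketch of the default-service-account poll; not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.29.3/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	for i := 0; i < 120; i++ { // retry cap is illustrative
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}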
	W0410 22:54:05.619839   58186 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:54:05.619847   58186 kubeadm.go:393] duration metric: took 5m12.640298551s to StartCluster
	I0410 22:54:05.619862   58186 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.619936   58186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:54:05.621989   58186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:54:05.622331   58186 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:54:05.624233   58186 out.go:177] * Verifying Kubernetes components...
	I0410 22:54:05.622444   58186 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:54:05.622516   58186 config.go:182] Loaded profile config "embed-certs-706500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:54:05.625850   58186 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-706500"
	I0410 22:54:05.625872   58186 addons.go:69] Setting default-storageclass=true in profile "embed-certs-706500"
	I0410 22:54:05.625882   58186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:54:05.625893   58186 addons.go:69] Setting metrics-server=true in profile "embed-certs-706500"
	I0410 22:54:05.625924   58186 addons.go:234] Setting addon metrics-server=true in "embed-certs-706500"
	W0410 22:54:05.625930   58186 addons.go:243] addon metrics-server should already be in state true
	I0410 22:54:05.625954   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.625888   58186 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-706500"
	I0410 22:54:05.625903   58186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-706500"
	W0410 22:54:05.625982   58186 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:54:05.626012   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.626365   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626407   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626421   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.626440   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626441   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.626442   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.643647   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0410 22:54:05.643758   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0410 22:54:05.644070   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0410 22:54:05.644101   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644253   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644856   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.644825   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.644883   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.644915   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.645239   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645419   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.645475   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.645489   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.645501   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.646021   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.646035   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646062   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.646588   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.646619   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.648242   58186 addons.go:234] Setting addon default-storageclass=true in "embed-certs-706500"
	W0410 22:54:05.648261   58186 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:54:05.648282   58186 host.go:66] Checking if "embed-certs-706500" exists ...
	I0410 22:54:05.648555   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.648582   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.661773   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0410 22:54:05.662556   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.663049   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.663073   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.663474   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.663708   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.664716   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I0410 22:54:05.665027   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.665617   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.665634   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.665706   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0410 22:54:05.666342   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.666343   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.665946   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.668790   58186 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:54:05.667015   58186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:54:05.667244   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.670336   58186 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:05.670357   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:54:05.670374   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.668826   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.668843   58186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:54:05.671350   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.671633   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.673653   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.675310   58186 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:54:05.674011   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.674533   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.676671   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:54:05.676677   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.676690   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:54:05.676710   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.676713   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.676821   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.676976   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.677117   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.680146   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.680927   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.680964   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.681136   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.681515   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.681681   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.681834   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
	I0410 22:54:05.688424   58186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0410 22:54:05.688861   58186 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:54:05.689299   58186 main.go:141] libmachine: Using API Version  1
	I0410 22:54:05.689320   58186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:54:05.689589   58186 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:54:05.689741   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetState
	I0410 22:54:05.691090   58186 main.go:141] libmachine: (embed-certs-706500) Calling .DriverName
	I0410 22:54:05.691335   58186 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:05.691353   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:54:05.691369   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHHostname
	I0410 22:54:05.694552   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695080   58186 main.go:141] libmachine: (embed-certs-706500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:c4:8c", ip: ""} in network mk-embed-certs-706500: {Iface:virbr3 ExpiryTime:2024-04-10 23:48:39 +0000 UTC Type:0 Mac:52:54:00:36:c4:8c Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:embed-certs-706500 Clientid:01:52:54:00:36:c4:8c}
	I0410 22:54:05.695118   58186 main.go:141] libmachine: (embed-certs-706500) DBG | domain embed-certs-706500 has defined IP address 192.168.39.10 and MAC address 52:54:00:36:c4:8c in network mk-embed-certs-706500
	I0410 22:54:05.695426   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHPort
	I0410 22:54:05.695771   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHKeyPath
	I0410 22:54:05.695939   58186 main.go:141] libmachine: (embed-certs-706500) Calling .GetSSHUsername
	I0410 22:54:05.696084   58186 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa Username:docker}
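Each "new ssh client" line above opens a key-based SSH connection to the node (192.168.39.10:22 as user "docker") that the addon files are then copied over. A minimal sketch of that client setup with golang.org/x/crypto/ssh, using the key path from the log; host-key verification is omitted only to keep the sketch short:

// Sketch of the SSH client setup; not minikube's sshutil implementation.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/18610-5679/.minikube/machines/embed-certs-706500/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", "192.168.39.10:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected; file copies and commands run over sessions on this client")
}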
	I0410 22:54:05.860032   58186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:54:05.881036   58186 node_ready.go:35] waiting up to 6m0s for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891218   58186 node_ready.go:49] node "embed-certs-706500" has status "Ready":"True"
	I0410 22:54:05.891237   58186 node_ready.go:38] duration metric: took 10.166143ms for node "embed-certs-706500" to be "Ready" ...
	I0410 22:54:05.891247   58186 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:05.899013   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:06.064031   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:54:06.064051   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:54:06.065727   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:54:06.075127   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:54:06.140574   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:54:06.140607   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:54:06.216389   58186 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:06.216428   58186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:54:06.356117   58186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:54:07.409983   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.334826611s)
	I0410 22:54:07.410039   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410052   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410103   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.344342448s)
	I0410 22:54:07.410184   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410199   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410313   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410321   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410362   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410371   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410382   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410452   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410505   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410519   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.410531   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.410465   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.410678   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410765   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410802   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.410820   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.410822   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.438723   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.438742   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.439085   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.439104   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.439085   58186 main.go:141] libmachine: (embed-certs-706500) DBG | Closing plugin on server side
	I0410 22:54:07.738187   58186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.382031326s)
	I0410 22:54:07.738252   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738267   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738556   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738586   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738597   58186 main.go:141] libmachine: Making call to close driver server
	I0410 22:54:07.738604   58186 main.go:141] libmachine: (embed-certs-706500) Calling .Close
	I0410 22:54:07.738865   58186 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:54:07.738885   58186 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:54:07.738908   58186 addons.go:470] Verifying addon metrics-server=true in "embed-certs-706500"
	I0410 22:54:07.741639   58186 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0410 22:54:05.403374   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:07.903041   57270 pod_ready.go:102] pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.895154   57270 pod_ready.go:81] duration metric: took 4m0.000708165s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" ...
	E0410 22:54:08.895186   57270 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-pw276" in "kube-system" namespace to be "Ready" (will not retry!)
	I0410 22:54:08.895214   57270 pod_ready.go:38] duration metric: took 4m14.550044852s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:08.895246   57270 kubeadm.go:591] duration metric: took 4m22.444968141s to restartPrimaryControlPlane
	W0410 22:54:08.895308   57270 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0410 22:54:08.895339   57270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0410 22:54:07.742954   58186 addons.go:505] duration metric: took 2.120520274s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0410 22:54:07.910203   58186 pod_ready.go:102] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"False"
	I0410 22:54:08.906369   58186 pod_ready.go:92] pod "coredns-76f75df574-bvdp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.906394   58186 pod_ready.go:81] duration metric: took 3.007348288s for pod "coredns-76f75df574-bvdp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.906407   58186 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913564   58186 pod_ready.go:92] pod "coredns-76f75df574-v2pp5" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.913582   58186 pod_ready.go:81] duration metric: took 7.168463ms for pod "coredns-76f75df574-v2pp5" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.913592   58186 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919270   58186 pod_ready.go:92] pod "etcd-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.919296   58186 pod_ready.go:81] duration metric: took 5.696297ms for pod "etcd-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.919308   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924389   58186 pod_ready.go:92] pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.924430   58186 pod_ready.go:81] duration metric: took 5.111624ms for pod "kube-apiserver-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.924443   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929296   58186 pod_ready.go:92] pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:08.929320   58186 pod_ready.go:81] duration metric: took 4.869073ms for pod "kube-controller-manager-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:08.929333   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305730   58186 pod_ready.go:92] pod "kube-proxy-xj5nq" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.305756   58186 pod_ready.go:81] duration metric: took 376.415901ms for pod "kube-proxy-xj5nq" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.305770   58186 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703841   58186 pod_ready.go:92] pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace has status "Ready":"True"
	I0410 22:54:09.703869   58186 pod_ready.go:81] duration metric: took 398.090582ms for pod "kube-scheduler-embed-certs-706500" in "kube-system" namespace to be "Ready" ...
	I0410 22:54:09.703881   58186 pod_ready.go:38] duration metric: took 3.812625835s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:54:09.703898   58186 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:54:09.703957   58186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:54:09.720728   58186 api_server.go:72] duration metric: took 4.098354983s to wait for apiserver process to appear ...
	I0410 22:54:09.720763   58186 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:54:09.720786   58186 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0410 22:54:09.726522   58186 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0410 22:54:09.727951   58186 api_server.go:141] control plane version: v1.29.3
	I0410 22:54:09.727979   58186 api_server.go:131] duration metric: took 7.20731ms to wait for apiserver health ...
	I0410 22:54:09.727989   58186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:54:09.908166   58186 system_pods.go:59] 9 kube-system pods found
	I0410 22:54:09.908203   58186 system_pods.go:61] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:09.908212   58186 system_pods.go:61] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:09.908217   58186 system_pods.go:61] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:09.908222   58186 system_pods.go:61] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:09.908227   58186 system_pods.go:61] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:09.908232   58186 system_pods.go:61] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:09.908236   58186 system_pods.go:61] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:09.908246   58186 system_pods.go:61] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:09.908251   58186 system_pods.go:61] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:09.908263   58186 system_pods.go:74] duration metric: took 180.267138ms to wait for pod list to return data ...
	I0410 22:54:09.908276   58186 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:54:10.103556   58186 default_sa.go:45] found service account: "default"
	I0410 22:54:10.103586   58186 default_sa.go:55] duration metric: took 195.301798ms for default service account to be created ...
	I0410 22:54:10.103597   58186 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:54:10.309537   58186 system_pods.go:86] 9 kube-system pods found
	I0410 22:54:10.309566   58186 system_pods.go:89] "coredns-76f75df574-bvdp5" [1cc8a326-77ef-469f-abf7-082ff8a44782] Running
	I0410 22:54:10.309572   58186 system_pods.go:89] "coredns-76f75df574-v2pp5" [2138fb5e-9c16-4a25-85d3-3d84b361a1e8] Running
	I0410 22:54:10.309578   58186 system_pods.go:89] "etcd-embed-certs-706500" [4a4b25f6-f8b7-49a2-9dfb-74d480775de7] Running
	I0410 22:54:10.309583   58186 system_pods.go:89] "kube-apiserver-embed-certs-706500" [33bf3126-e3fa-49f8-829d-8fb5ab407062] Running
	I0410 22:54:10.309588   58186 system_pods.go:89] "kube-controller-manager-embed-certs-706500" [97ca8487-eb31-43f8-ab20-873a134bdcad] Running
	I0410 22:54:10.309592   58186 system_pods.go:89] "kube-proxy-xj5nq" [c1bb1878-3e4b-4647-a3a7-cb327ccbd364] Running
	I0410 22:54:10.309596   58186 system_pods.go:89] "kube-scheduler-embed-certs-706500" [977f178e-11a1-46a9-87a1-04a5a915c267] Running
	I0410 22:54:10.309602   58186 system_pods.go:89] "metrics-server-57f55c9bc5-9mrmz" [a4ccd29a-d27e-4291-ac8c-3135d65f8a2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:54:10.309607   58186 system_pods.go:89] "storage-provisioner" [8ad8e533-69ca-4eb5-9595-e6808dc0ff1a] Running
	I0410 22:54:10.309617   58186 system_pods.go:126] duration metric: took 206.014442ms to wait for k8s-apps to be running ...
	I0410 22:54:10.309624   58186 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:54:10.309666   58186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:10.324614   58186 system_svc.go:56] duration metric: took 14.97975ms WaitForService to wait for kubelet
	I0410 22:54:10.324651   58186 kubeadm.go:576] duration metric: took 4.702277594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:54:10.324669   58186 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:54:10.503911   58186 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:54:10.503939   58186 node_conditions.go:123] node cpu capacity is 2
	I0410 22:54:10.503949   58186 node_conditions.go:105] duration metric: took 179.27538ms to run NodePressure ...
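	The same capacity figures (2 CPUs, 17734596Ki ephemeral storage) can be read straight from the node object; the node name is assumed from the profile and the jsonpath expression is illustrative:
	    kubectl get node embed-certs-706500 \
	      -o jsonpath='{.status.capacity.cpu}{"\n"}{.status.capacity.ephemeral-storage}{"\n"}'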
	I0410 22:54:10.503959   58186 start.go:240] waiting for startup goroutines ...
	I0410 22:54:10.503966   58186 start.go:245] waiting for cluster config update ...
	I0410 22:54:10.503975   58186 start.go:254] writing updated cluster config ...
	I0410 22:54:10.504242   58186 ssh_runner.go:195] Run: rm -f paused
	I0410 22:54:10.555500   58186 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0410 22:54:10.557941   58186 out.go:177] * Done! kubectl is now configured to use "embed-certs-706500" cluster and "default" namespace by default
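	At this point the kubeconfig context should point at the new profile; a quick sanity check (illustrative, not run by the test):
	    kubectl config current-context   # expected: embed-certs-706500
	    kubectl get nodes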
	I0410 22:54:37.664290   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:54:37.664604   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:54:37.664634   57719 kubeadm.go:309] 
	I0410 22:54:37.664776   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:54:37.664843   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:54:37.664854   57719 kubeadm.go:309] 
	I0410 22:54:37.664901   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:54:37.664968   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:54:37.665086   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:54:37.665101   57719 kubeadm.go:309] 
	I0410 22:54:37.665245   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:54:37.665313   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:54:37.665360   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:54:37.665372   57719 kubeadm.go:309] 
	I0410 22:54:37.665579   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:54:37.665695   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:54:37.665707   57719 kubeadm.go:309] 
	I0410 22:54:37.665868   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:54:37.666063   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:54:37.666192   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:54:37.666272   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:54:37.666284   57719 kubeadm.go:309] 
	I0410 22:54:37.667202   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:37.667329   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:54:37.667420   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0410 22:54:37.667555   57719 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0410 22:54:37.667623   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
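	The failure above comes with its own triage recipe; collected in one place, the suggested commands are (quoted from the kubeadm output above, with the CRI-O socket path this job uses; CONTAINERID is a placeholder for the failing container):
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    curl -sSL http://localhost:10248/healthz
	    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID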
	I0410 22:54:40.975782   57270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.080419546s)
	I0410 22:54:40.975854   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:40.993677   57270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0410 22:54:41.006185   57270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:41.016820   57270 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:41.016850   57270 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:41.016985   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:41.026802   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:41.026871   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:41.036992   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:41.046896   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:41.046962   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:41.057184   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.067261   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:41.067321   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:41.077846   57270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:41.087745   57270 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:41.087795   57270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
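	The config check above boils down to: keep each kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint, otherwise remove it. A hedged shell sketch of that logic (file names and endpoint are taken from the log; the loop itself is illustrative):
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done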
	I0410 22:54:41.098660   57270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:41.159736   57270 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.1
	I0410 22:54:41.159807   57270 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:54:41.316137   57270 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:54:41.316279   57270 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:54:41.316446   57270 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:54:41.559720   57270 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:54:41.561946   57270 out.go:204]   - Generating certificates and keys ...
	I0410 22:54:41.562039   57270 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:54:41.562141   57270 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:54:41.562211   57270 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:54:41.562275   57270 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:54:41.562352   57270 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:54:41.562460   57270 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:54:41.562572   57270 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:54:41.562667   57270 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:54:41.562803   57270 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:54:41.562917   57270 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:54:41.562992   57270 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:54:41.563081   57270 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:54:41.723729   57270 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:54:41.834274   57270 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0410 22:54:41.936758   57270 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:54:42.038298   57270 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:54:42.229459   57270 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:54:42.230047   57270 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:54:42.233021   57270 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:54:42.236068   57270 out.go:204]   - Booting up control plane ...
	I0410 22:54:42.236197   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:54:42.236303   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:54:42.236421   57270 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:54:42.255487   57270 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:54:42.256345   57270 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:54:42.256450   57270 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:54:42.391623   57270 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0410 22:54:42.391736   57270 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0410 22:54:43.393825   57270 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00265832s
	I0410 22:54:43.393973   57270 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0410 22:54:43.156141   57719 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.488487447s)
	I0410 22:54:43.156227   57719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:54:43.170709   57719 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0410 22:54:43.180624   57719 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0410 22:54:43.180647   57719 kubeadm.go:156] found existing configuration files:
	
	I0410 22:54:43.180701   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0410 22:54:43.190482   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0410 22:54:43.190533   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0410 22:54:43.200261   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0410 22:54:43.210061   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0410 22:54:43.210116   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0410 22:54:43.220430   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.230810   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0410 22:54:43.230877   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0410 22:54:43.241141   57719 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0410 22:54:43.251043   57719 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0410 22:54:43.251111   57719 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0410 22:54:43.261163   57719 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0410 22:54:43.534002   57719 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:48.398196   57270 kubeadm.go:309] [api-check] The API server is healthy after 5.002218646s
	I0410 22:54:48.410618   57270 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0410 22:54:48.430553   57270 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0410 22:54:48.465343   57270 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0410 22:54:48.465614   57270 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-646133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0410 22:54:48.489066   57270 kubeadm.go:309] [bootstrap-token] Using token: 14xwwp.uyth37qsjfn0mpcx
	I0410 22:54:48.490984   57270 out.go:204]   - Configuring RBAC rules ...
	I0410 22:54:48.491116   57270 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0410 22:54:48.502789   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0410 22:54:48.516871   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0410 22:54:48.523600   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0410 22:54:48.527939   57270 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0410 22:54:48.537216   57270 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0410 22:54:48.806350   57270 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0410 22:54:49.234618   57270 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0410 22:54:49.803640   57270 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0410 22:54:49.804948   57270 kubeadm.go:309] 
	I0410 22:54:49.805074   57270 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0410 22:54:49.805095   57270 kubeadm.go:309] 
	I0410 22:54:49.805194   57270 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0410 22:54:49.805209   57270 kubeadm.go:309] 
	I0410 22:54:49.805240   57270 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0410 22:54:49.805323   57270 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0410 22:54:49.805403   57270 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0410 22:54:49.805415   57270 kubeadm.go:309] 
	I0410 22:54:49.805482   57270 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0410 22:54:49.805489   57270 kubeadm.go:309] 
	I0410 22:54:49.805562   57270 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0410 22:54:49.805580   57270 kubeadm.go:309] 
	I0410 22:54:49.805646   57270 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0410 22:54:49.805781   57270 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0410 22:54:49.805888   57270 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0410 22:54:49.805901   57270 kubeadm.go:309] 
	I0410 22:54:49.806038   57270 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0410 22:54:49.806143   57270 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0410 22:54:49.806154   57270 kubeadm.go:309] 
	I0410 22:54:49.806262   57270 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806398   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 \
	I0410 22:54:49.806438   57270 kubeadm.go:309] 	--control-plane 
	I0410 22:54:49.806456   57270 kubeadm.go:309] 
	I0410 22:54:49.806565   57270 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0410 22:54:49.806581   57270 kubeadm.go:309] 
	I0410 22:54:49.806661   57270 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 14xwwp.uyth37qsjfn0mpcx \
	I0410 22:54:49.806777   57270 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f23510b5ac28f8f42919becc6f4945068a05a3fcf79470ca514b0af3de7a2bb0 
	I0410 22:54:49.808385   57270 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0410 22:54:49.808455   57270 cni.go:84] Creating CNI manager for ""
	I0410 22:54:49.808473   57270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 22:54:49.811276   57270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0410 22:54:49.812840   57270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0410 22:54:49.829865   57270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
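	The bridge CNI drop-in is written to /etc/cni/net.d/1-k8s.conflist (496 bytes; its contents are not reproduced in this log). An illustrative way to confirm it landed on the node:
	    ls -l /etc/cni/net.d/
	    sudo cat /etc/cni/net.d/1-k8s.conflist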
	I0410 22:54:49.854383   57270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0410 22:54:49.854454   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:49.854456   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-646133 minikube.k8s.io/updated_at=2024_04_10T22_54_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e101a833c5a782975ee1da694982099ed32984f2 minikube.k8s.io/name=no-preload-646133 minikube.k8s.io/primary=true
	I0410 22:54:49.888254   57270 ops.go:34] apiserver oom_adj: -16
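	To verify the two kubectl invocations above took effect, the binding and node labels can be inspected directly (resource and node names are taken from the commands in the log; these checks are illustrative):
	    kubectl get clusterrolebinding minikube-rbac -o wide
	    kubectl get node no-preload-646133 --show-labels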
	I0410 22:54:50.073922   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:50.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.074134   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:51.574654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.074970   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:52.574248   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.074799   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:53.574902   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.074695   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:54.574038   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.074975   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:55.574297   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.074490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:56.574490   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.074280   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:57.574569   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.074654   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:58.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.074630   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:54:59.574546   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.075044   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:00.574740   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.074961   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:01.574004   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.074121   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.574476   57270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0410 22:55:02.705604   57270 kubeadm.go:1107] duration metric: took 12.851213125s to wait for elevateKubeSystemPrivileges
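	The repeated "kubectl get sa default" runs above are a retry loop waiting for the "default" service account to exist. A hedged sketch of the same wait (the command is quoted from the log; the 0.5s interval is illustrative):
	    until sudo /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done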
	W0410 22:55:02.705636   57270 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0410 22:55:02.705644   57270 kubeadm.go:393] duration metric: took 5m16.306442396s to StartCluster
	I0410 22:55:02.705660   57270 settings.go:142] acquiring lock: {Name:mk5dc8e9a07a91433645b19ffba859d70c73be71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.705739   57270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:55:02.707592   57270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/kubeconfig: {Name:mkd6f498bec10d7b0d2291f83f5e27766227bc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 22:55:02.707844   57270 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0410 22:55:02.709479   57270 out.go:177] * Verifying Kubernetes components...
	I0410 22:55:02.707944   57270 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0410 22:55:02.708074   57270 config.go:182] Loaded profile config "no-preload-646133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.1
	I0410 22:55:02.710816   57270 addons.go:69] Setting storage-provisioner=true in profile "no-preload-646133"
	I0410 22:55:02.710827   57270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0410 22:55:02.710854   57270 addons.go:234] Setting addon storage-provisioner=true in "no-preload-646133"
	W0410 22:55:02.710865   57270 addons.go:243] addon storage-provisioner should already be in state true
	I0410 22:55:02.710889   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.710819   57270 addons.go:69] Setting default-storageclass=true in profile "no-preload-646133"
	I0410 22:55:02.710975   57270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-646133"
	I0410 22:55:02.710821   57270 addons.go:69] Setting metrics-server=true in profile "no-preload-646133"
	I0410 22:55:02.711079   57270 addons.go:234] Setting addon metrics-server=true in "no-preload-646133"
	W0410 22:55:02.711090   57270 addons.go:243] addon metrics-server should already be in state true
	I0410 22:55:02.711119   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.711325   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711349   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711352   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711382   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.711486   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.711507   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.729696   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0410 22:55:02.730179   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.730725   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.730751   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.731138   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I0410 22:55:02.731161   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0410 22:55:02.731223   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.731532   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731551   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.731920   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.731951   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.732083   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732103   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732266   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.732290   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.732642   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732692   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.732892   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.733291   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.733336   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.737245   57270 addons.go:234] Setting addon default-storageclass=true in "no-preload-646133"
	W0410 22:55:02.737274   57270 addons.go:243] addon default-storageclass should already be in state true
	I0410 22:55:02.737304   57270 host.go:66] Checking if "no-preload-646133" exists ...
	I0410 22:55:02.737674   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.737710   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.749656   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0410 22:55:02.750133   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.751030   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.751054   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.751467   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.751642   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.752548   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0410 22:55:02.753119   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.753727   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.753903   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.753918   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.755963   57270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0410 22:55:02.754443   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.757499   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0410 22:55:02.757548   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0410 22:55:02.757559   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0410 22:55:02.757576   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.757684   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.758428   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.758880   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.758893   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.759783   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.760197   57270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:55:02.760224   57270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:55:02.760379   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.762291   57270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0410 22:55:02.761210   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.761741   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.763819   57270 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:02.763907   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0410 22:55:02.763918   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.763841   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.763963   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.764040   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.764153   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.764239   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.767729   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767758   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.767776   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.767730   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.767951   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.768100   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.768223   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.782788   57270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0410 22:55:02.783161   57270 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:55:02.783701   57270 main.go:141] libmachine: Using API Version  1
	I0410 22:55:02.783726   57270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:55:02.784081   57270 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:55:02.784347   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetState
	I0410 22:55:02.785932   57270 main.go:141] libmachine: (no-preload-646133) Calling .DriverName
	I0410 22:55:02.786186   57270 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:02.786200   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0410 22:55:02.786217   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHHostname
	I0410 22:55:02.789193   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789526   57270 main.go:141] libmachine: (no-preload-646133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:62:0e", ip: ""} in network mk-no-preload-646133: {Iface:virbr2 ExpiryTime:2024-04-10 23:49:20 +0000 UTC Type:0 Mac:52:54:00:35:62:0e Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:no-preload-646133 Clientid:01:52:54:00:35:62:0e}
	I0410 22:55:02.789576   57270 main.go:141] libmachine: (no-preload-646133) DBG | domain no-preload-646133 has defined IP address 192.168.50.17 and MAC address 52:54:00:35:62:0e in network mk-no-preload-646133
	I0410 22:55:02.789837   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHPort
	I0410 22:55:02.790096   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHKeyPath
	I0410 22:55:02.790278   57270 main.go:141] libmachine: (no-preload-646133) Calling .GetSSHUsername
	I0410 22:55:02.790431   57270 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/no-preload-646133/id_rsa Username:docker}
	I0410 22:55:02.922239   57270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0410 22:55:02.957665   57270 node_ready.go:35] waiting up to 6m0s for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981427   57270 node_ready.go:49] node "no-preload-646133" has status "Ready":"True"
	I0410 22:55:02.981449   57270 node_ready.go:38] duration metric: took 23.75134ms for node "no-preload-646133" to be "Ready" ...
	I0410 22:55:02.981458   57270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:02.986557   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:03.024992   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0410 22:55:03.032744   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0410 22:55:03.156968   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0410 22:55:03.156989   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0410 22:55:03.237497   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0410 22:55:03.237522   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0410 22:55:03.274982   57270 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.275005   57270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0410 22:55:03.317464   57270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0410 22:55:03.512107   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512130   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512173   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512198   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512435   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512455   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512525   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512530   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512541   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512542   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512538   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512551   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.512558   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.512497   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.512782   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512799   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512876   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.512915   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.512878   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.525688   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.525707   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.526017   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.526042   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.526057   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.905597   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.905627   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906016   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906081   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906089   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906101   57270 main.go:141] libmachine: Making call to close driver server
	I0410 22:55:03.906107   57270 main.go:141] libmachine: (no-preload-646133) Calling .Close
	I0410 22:55:03.906353   57270 main.go:141] libmachine: (no-preload-646133) DBG | Closing plugin on server side
	I0410 22:55:03.906355   57270 main.go:141] libmachine: Successfully made call to close driver server
	I0410 22:55:03.906381   57270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0410 22:55:03.906392   57270 addons.go:470] Verifying addon metrics-server=true in "no-preload-646133"
	I0410 22:55:03.908467   57270 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0410 22:55:03.910238   57270 addons.go:505] duration metric: took 1.20230017s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
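	With the three addons enabled, their workloads should appear in kube-system; an illustrative spot check (the metrics-server Deployment name is assumed from the pod name in the log):
	    kubectl -n kube-system get deployment metrics-server
	    kubectl -n kube-system get pod storage-provisioner
	    kubectl get storageclass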
	I0410 22:55:05.035855   57270 pod_ready.go:102] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"False"
	I0410 22:55:05.493330   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.493354   57270 pod_ready.go:81] duration metric: took 2.506773848s for pod "coredns-7db6d8ff4d-jm2zw" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.493365   57270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498568   57270 pod_ready.go:92] pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.498593   57270 pod_ready.go:81] duration metric: took 5.220548ms for pod "coredns-7db6d8ff4d-v599p" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.498604   57270 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505133   57270 pod_ready.go:92] pod "etcd-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.505156   57270 pod_ready.go:81] duration metric: took 6.544104ms for pod "etcd-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.505165   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510391   57270 pod_ready.go:92] pod "kube-apiserver-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.510415   57270 pod_ready.go:81] duration metric: took 5.2417ms for pod "kube-apiserver-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.510427   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524717   57270 pod_ready.go:92] pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.524737   57270 pod_ready.go:81] duration metric: took 14.302445ms for pod "kube-controller-manager-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.524747   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891005   57270 pod_ready.go:92] pod "kube-proxy-24vhc" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:05.891029   57270 pod_ready.go:81] duration metric: took 366.275947ms for pod "kube-proxy-24vhc" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:05.891039   57270 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291050   57270 pod_ready.go:92] pod "kube-scheduler-no-preload-646133" in "kube-system" namespace has status "Ready":"True"
	I0410 22:55:06.291075   57270 pod_ready.go:81] duration metric: took 400.028808ms for pod "kube-scheduler-no-preload-646133" in "kube-system" namespace to be "Ready" ...
	I0410 22:55:06.291084   57270 pod_ready.go:38] duration metric: took 3.309617471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0410 22:55:06.291101   57270 api_server.go:52] waiting for apiserver process to appear ...
	I0410 22:55:06.291165   57270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:55:06.308433   57270 api_server.go:72] duration metric: took 3.600549626s to wait for apiserver process to appear ...
	I0410 22:55:06.308461   57270 api_server.go:88] waiting for apiserver healthz status ...
	I0410 22:55:06.308479   57270 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
	I0410 22:55:06.312630   57270 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
	ok
	I0410 22:55:06.313434   57270 api_server.go:141] control plane version: v1.30.0-rc.1
	I0410 22:55:06.313457   57270 api_server.go:131] duration metric: took 4.989017ms to wait for apiserver health ...
	I0410 22:55:06.313466   57270 system_pods.go:43] waiting for kube-system pods to appear ...
	I0410 22:55:06.494780   57270 system_pods.go:59] 9 kube-system pods found
	I0410 22:55:06.494813   57270 system_pods.go:61] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.494820   57270 system_pods.go:61] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.494826   57270 system_pods.go:61] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.494833   57270 system_pods.go:61] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.494838   57270 system_pods.go:61] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.494843   57270 system_pods.go:61] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.494848   57270 system_pods.go:61] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.494856   57270 system_pods.go:61] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.494862   57270 system_pods.go:61] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.494871   57270 system_pods.go:74] duration metric: took 181.399385ms to wait for pod list to return data ...
	I0410 22:55:06.494890   57270 default_sa.go:34] waiting for default service account to be created ...
	I0410 22:55:06.690158   57270 default_sa.go:45] found service account: "default"
	I0410 22:55:06.690185   57270 default_sa.go:55] duration metric: took 195.289153ms for default service account to be created ...
	I0410 22:55:06.690194   57270 system_pods.go:116] waiting for k8s-apps to be running ...
	I0410 22:55:06.893604   57270 system_pods.go:86] 9 kube-system pods found
	I0410 22:55:06.893632   57270 system_pods.go:89] "coredns-7db6d8ff4d-jm2zw" [9d8b995c-717e-43a5-a963-f07a4f7a76a8] Running
	I0410 22:55:06.893638   57270 system_pods.go:89] "coredns-7db6d8ff4d-v599p" [f30c2827-5930-41d4-82b7-edfb839b3a74] Running
	I0410 22:55:06.893642   57270 system_pods.go:89] "etcd-no-preload-646133" [43f97c7f-c75c-4af4-80c1-11194210d8dd] Running
	I0410 22:55:06.893646   57270 system_pods.go:89] "kube-apiserver-no-preload-646133" [ca38242e-c714-49f7-a2df-3f26c6c37d44] Running
	I0410 22:55:06.893651   57270 system_pods.go:89] "kube-controller-manager-no-preload-646133" [a4c79943-eacf-46a5-b57a-f262c7dc97ef] Running
	I0410 22:55:06.893656   57270 system_pods.go:89] "kube-proxy-24vhc" [ca175e85-76f2-47d2-91a5-0248194a88e8] Running
	I0410 22:55:06.893659   57270 system_pods.go:89] "kube-scheduler-no-preload-646133" [fb5f38f5-0c9d-4176-8b3e-4d8c5f71c5cf] Running
	I0410 22:55:06.893665   57270 system_pods.go:89] "metrics-server-569cc877fc-bj59f" [4aace435-90be-456a-8a85-dbee0026212c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0410 22:55:06.893670   57270 system_pods.go:89] "storage-provisioner" [3232daa9-da88-4152-97c8-e86b3d50b0b8] Running
	I0410 22:55:06.893679   57270 system_pods.go:126] duration metric: took 203.480657ms to wait for k8s-apps to be running ...
	I0410 22:55:06.893686   57270 system_svc.go:44] waiting for kubelet service to be running ....
	I0410 22:55:06.893730   57270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:55:06.909072   57270 system_svc.go:56] duration metric: took 15.374403ms WaitForService to wait for kubelet
	I0410 22:55:06.909096   57270 kubeadm.go:576] duration metric: took 4.20122533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0410 22:55:06.909115   57270 node_conditions.go:102] verifying NodePressure condition ...
	I0410 22:55:07.090651   57270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0410 22:55:07.090673   57270 node_conditions.go:123] node cpu capacity is 2
	I0410 22:55:07.090682   57270 node_conditions.go:105] duration metric: took 181.563241ms to run NodePressure ...
	I0410 22:55:07.090692   57270 start.go:240] waiting for startup goroutines ...
	I0410 22:55:07.090698   57270 start.go:245] waiting for cluster config update ...
	I0410 22:55:07.090707   57270 start.go:254] writing updated cluster config ...
	I0410 22:55:07.090957   57270 ssh_runner.go:195] Run: rm -f paused
	I0410 22:55:07.140644   57270 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.1 (minor skew: 1)
	I0410 22:55:07.142770   57270 out.go:177] * Done! kubectl is now configured to use "no-preload-646133" cluster and "default" namespace by default
	I0410 22:56:40.435994   57719 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0410 22:56:40.436123   57719 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0410 22:56:40.437810   57719 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0410 22:56:40.437872   57719 kubeadm.go:309] [preflight] Running pre-flight checks
	I0410 22:56:40.437967   57719 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0410 22:56:40.438082   57719 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0410 22:56:40.438235   57719 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0410 22:56:40.438321   57719 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0410 22:56:40.440009   57719 out.go:204]   - Generating certificates and keys ...
	I0410 22:56:40.440110   57719 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0410 22:56:40.440210   57719 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0410 22:56:40.440336   57719 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0410 22:56:40.440417   57719 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0410 22:56:40.440501   57719 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0410 22:56:40.440563   57719 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0410 22:56:40.440622   57719 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0410 22:56:40.440685   57719 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0410 22:56:40.440752   57719 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0410 22:56:40.440858   57719 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0410 22:56:40.440923   57719 kubeadm.go:309] [certs] Using the existing "sa" key
	I0410 22:56:40.441004   57719 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0410 22:56:40.441076   57719 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0410 22:56:40.441131   57719 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0410 22:56:40.441185   57719 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0410 22:56:40.441242   57719 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0410 22:56:40.441375   57719 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0410 22:56:40.441501   57719 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0410 22:56:40.441565   57719 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0410 22:56:40.441658   57719 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0410 22:56:40.443122   57719 out.go:204]   - Booting up control plane ...
	I0410 22:56:40.443230   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0410 22:56:40.443332   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0410 22:56:40.443431   57719 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0410 22:56:40.443549   57719 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0410 22:56:40.443710   57719 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0410 22:56:40.443783   57719 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0410 22:56:40.443883   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444111   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444200   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444429   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444520   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.444761   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.444869   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445124   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445235   57719 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0410 22:56:40.445416   57719 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0410 22:56:40.445423   57719 kubeadm.go:309] 
	I0410 22:56:40.445465   57719 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0410 22:56:40.445512   57719 kubeadm.go:309] 		timed out waiting for the condition
	I0410 22:56:40.445520   57719 kubeadm.go:309] 
	I0410 22:56:40.445548   57719 kubeadm.go:309] 	This error is likely caused by:
	I0410 22:56:40.445595   57719 kubeadm.go:309] 		- The kubelet is not running
	I0410 22:56:40.445712   57719 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0410 22:56:40.445722   57719 kubeadm.go:309] 
	I0410 22:56:40.445880   57719 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0410 22:56:40.445931   57719 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0410 22:56:40.445967   57719 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0410 22:56:40.445972   57719 kubeadm.go:309] 
	I0410 22:56:40.446095   57719 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0410 22:56:40.446190   57719 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0410 22:56:40.446201   57719 kubeadm.go:309] 
	I0410 22:56:40.446326   57719 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0410 22:56:40.446452   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0410 22:56:40.446548   57719 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0410 22:56:40.446611   57719 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0410 22:56:40.446659   57719 kubeadm.go:309] 
	I0410 22:56:40.446681   57719 kubeadm.go:393] duration metric: took 8m5.163157284s to StartCluster
	I0410 22:56:40.446805   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0410 22:56:40.446880   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0410 22:56:40.499163   57719 cri.go:89] found id: ""
	I0410 22:56:40.499196   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.499205   57719 logs.go:278] No container was found matching "kube-apiserver"
	I0410 22:56:40.499212   57719 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0410 22:56:40.499292   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0410 22:56:40.545429   57719 cri.go:89] found id: ""
	I0410 22:56:40.545465   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.545473   57719 logs.go:278] No container was found matching "etcd"
	I0410 22:56:40.545479   57719 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0410 22:56:40.545538   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0410 22:56:40.583842   57719 cri.go:89] found id: ""
	I0410 22:56:40.583870   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.583880   57719 logs.go:278] No container was found matching "coredns"
	I0410 22:56:40.583887   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0410 22:56:40.583957   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0410 22:56:40.621054   57719 cri.go:89] found id: ""
	I0410 22:56:40.621075   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.621083   57719 logs.go:278] No container was found matching "kube-scheduler"
	I0410 22:56:40.621091   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0410 22:56:40.621149   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0410 22:56:40.665133   57719 cri.go:89] found id: ""
	I0410 22:56:40.665161   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.665168   57719 logs.go:278] No container was found matching "kube-proxy"
	I0410 22:56:40.665175   57719 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0410 22:56:40.665231   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0410 22:56:40.707490   57719 cri.go:89] found id: ""
	I0410 22:56:40.707519   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.707529   57719 logs.go:278] No container was found matching "kube-controller-manager"
	I0410 22:56:40.707536   57719 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0410 22:56:40.707598   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0410 22:56:40.748539   57719 cri.go:89] found id: ""
	I0410 22:56:40.748565   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.748576   57719 logs.go:278] No container was found matching "kindnet"
	I0410 22:56:40.748584   57719 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0410 22:56:40.748644   57719 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0410 22:56:40.792326   57719 cri.go:89] found id: ""
	I0410 22:56:40.792349   57719 logs.go:276] 0 containers: []
	W0410 22:56:40.792358   57719 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0410 22:56:40.792366   57719 logs.go:123] Gathering logs for kubelet ...
	I0410 22:56:40.792376   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0410 22:56:40.844309   57719 logs.go:123] Gathering logs for dmesg ...
	I0410 22:56:40.844346   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0410 22:56:40.859678   57719 logs.go:123] Gathering logs for describe nodes ...
	I0410 22:56:40.859715   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0410 22:56:40.950099   57719 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0410 22:56:40.950123   57719 logs.go:123] Gathering logs for CRI-O ...
	I0410 22:56:40.950141   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0410 22:56:41.073547   57719 logs.go:123] Gathering logs for container status ...
	I0410 22:56:41.073589   57719 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0410 22:56:41.124970   57719 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0410 22:56:41.125024   57719 out.go:239] * 
	W0410 22:56:41.125096   57719 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.125129   57719 out.go:239] * 
	W0410 22:56:41.126153   57719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 22:56:41.129869   57719 out.go:177] 
	W0410 22:56:41.131207   57719 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0410 22:56:41.131286   57719 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0410 22:56:41.131326   57719 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0410 22:56:41.133049   57719 out.go:177] 
	
	
	==> CRI-O <==
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.793339686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790476793314733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ef4aa74-5815-4728-a87b-0670442d0df3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.794231713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5b5372b-9308-4437-80c2-f8bdfc300c3b name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.794308859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5b5372b-9308-4437-80c2-f8bdfc300c3b name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.794350457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e5b5372b-9308-4437-80c2-f8bdfc300c3b name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.830020596Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=143f6999-9eee-4b41-8473-8d549f777b4e name=/runtime.v1.RuntimeService/Version
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.830139819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=143f6999-9eee-4b41-8473-8d549f777b4e name=/runtime.v1.RuntimeService/Version
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.831502427Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a316d20f-2079-4896-bdc7-56106b698d10 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.831910798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790476831890558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a316d20f-2079-4896-bdc7-56106b698d10 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.832497462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4effe2b3-cc09-4e70-ae98-3376d83b6ed3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.832576513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4effe2b3-cc09-4e70-ae98-3376d83b6ed3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.832618451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4effe2b3-cc09-4e70-ae98-3376d83b6ed3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.867347593Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51601f46-0b0e-426a-95af-beddb6afd1c1 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.867446531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51601f46-0b0e-426a-95af-beddb6afd1c1 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.868506114Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91cee305-040d-4757-bdde-e712f4978afb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.868959470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790476868935794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91cee305-040d-4757-bdde-e712f4978afb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.869509953Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ffe33b0-3e0f-4d2d-aa7d-f9515588f729 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.869619367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ffe33b0-3e0f-4d2d-aa7d-f9515588f729 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.869671799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3ffe33b0-3e0f-4d2d-aa7d-f9515588f729 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.910118098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3bd9d308-01ac-4a61-ad72-77c0e71605e4 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.910293242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bd9d308-01ac-4a61-ad72-77c0e71605e4 name=/runtime.v1.RuntimeService/Version
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.911751300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b21e9cdf-7004-47fc-9a44-895ca7f321f3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.912124970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712790476912100347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b21e9cdf-7004-47fc-9a44-895ca7f321f3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.912856817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa54efc5-8715-426b-bcc1-06e264935c18 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.912953823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa54efc5-8715-426b-bcc1-06e264935c18 name=/runtime.v1.RuntimeService/ListContainers
	Apr 10 23:07:56 old-k8s-version-862528 crio[650]: time="2024-04-10 23:07:56.913005266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fa54efc5-8715-426b-bcc1-06e264935c18 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr10 22:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052439] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041651] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.553485] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.712541] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.654645] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.367023] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.061213] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068973] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.198082] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.121287] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.251878] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.515656] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.064093] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.589961] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[ +11.062720] kauditd_printk_skb: 46 callbacks suppressed
	[Apr10 22:52] systemd-fstab-generator[4966]: Ignoring "noauto" option for root device
	[Apr10 22:54] systemd-fstab-generator[5254]: Ignoring "noauto" option for root device
	[  +0.070219] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:07:57 up 19 min,  0 users,  load average: 0.04, 0.03, 0.03
	Linux old-k8s-version-862528 5.10.207 #1 SMP Wed Apr 10 14:57:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000bb59e0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000bf6060, 0x24, 0x0, ...)
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]: net.(*Dialer).DialContext(0xc000a95980, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bf6060, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000aa4300, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bf6060, 0x24, 0x60, 0x7f24bdabec00, 0x118, ...)
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]: net/http.(*Transport).dial(0xc000a5d900, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bf6060, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]: net/http.(*Transport).dialConn(0xc000a5d900, 0x4f7fe00, 0xc000052030, 0x0, 0xc000221b60, 0x5, 0xc000bf6060, 0x24, 0x0, 0xc00087b320, ...)
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]: net/http.(*Transport).dialConnFor(0xc000a5d900, 0xc000b258c0)
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]: created by net/http.(*Transport).queueForDial
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6728]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 10 23:07:55 old-k8s-version-862528 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 10 23:07:55 old-k8s-version-862528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 10 23:07:55 old-k8s-version-862528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 137.
	Apr 10 23:07:55 old-k8s-version-862528 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 10 23:07:55 old-k8s-version-862528 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6755]: I0410 23:07:55.853617    6755 server.go:416] Version: v1.20.0
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6755]: I0410 23:07:55.854007    6755 server.go:837] Client rotation is on, will bootstrap in background
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6755]: I0410 23:07:55.856859    6755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6755]: W0410 23:07:55.858393    6755 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 10 23:07:55 old-k8s-version-862528 kubelet[6755]: I0410 23:07:55.858628    6755 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862528 -n old-k8s-version-862528
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 2 (249.63713ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-862528" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (130.43s)
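The failing old-k8s-version (v1.20.0) attempts above all end in the same kubelet-check timeout, and the failure output itself names the next diagnostic steps. As a minimal triage sketch only, assuming the profile name old-k8s-version-862528 and the CRI-O socket path /var/run/crio/crio.sock quoted in the logs above:

	# On the node (minikube ssh -p old-k8s-version-862528): check whether the kubelet is running and why it exited
	systemctl status kubelet
	journalctl -xeu kubelet
	# List any control-plane containers CRI-O actually started
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# From the host: retry with the cgroup-driver override the report suggests, then collect logs for an issue report
	minikube start -p old-k8s-version-862528 --extra-config=kubelet.cgroup-driver=systemd
	minikube logs -p old-k8s-version-862528 --file=logs.txt

These commands are copied from the suggestions quoted in the failure output above (the kubeadm hints and the K8S_KUBELET_NOT_RUNNING advice), not from an independent diagnosis of this run.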

Test pass (254/321)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 48
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 12.57
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.07
18 TestDownloadOnly/v1.29.3/DeleteAll 0.14
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
22 TestDownloadOnly/v1.30.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.30.0-rc.1/DeleteAll 0.14
28 TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
31 TestOffline 125.48
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 149.95
38 TestAddons/parallel/Registry 83.67
40 TestAddons/parallel/InspektorGadget 11.83
41 TestAddons/parallel/MetricsServer 6.98
42 TestAddons/parallel/HelmTiller 12.2
44 TestAddons/parallel/CSI 110.51
45 TestAddons/parallel/Headlamp 17.97
46 TestAddons/parallel/CloudSpanner 6.76
47 TestAddons/parallel/LocalPath 12.44
48 TestAddons/parallel/NvidiaDevicePlugin 5.54
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
54 TestCertOptions 55.33
55 TestCertExpiration 627.8
57 TestForceSystemdFlag 59.52
58 TestForceSystemdEnv 45.99
60 TestKVMDriverInstallOrUpdate 4.09
64 TestErrorSpam/setup 46.31
65 TestErrorSpam/start 0.38
66 TestErrorSpam/status 0.76
67 TestErrorSpam/pause 1.62
68 TestErrorSpam/unpause 1.69
69 TestErrorSpam/stop 5.47
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 100.32
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 37.98
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.76
81 TestFunctional/serial/CacheCmd/cache/add_local 2.18
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 33.82
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.61
92 TestFunctional/serial/LogsFileCmd 1.54
93 TestFunctional/serial/InvalidService 4.35
95 TestFunctional/parallel/ConfigCmd 0.43
96 TestFunctional/parallel/DashboardCmd 18.59
97 TestFunctional/parallel/DryRun 0.33
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 1.33
103 TestFunctional/parallel/ServiceCmdConnect 12.68
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 50.78
107 TestFunctional/parallel/SSHCmd 0.48
108 TestFunctional/parallel/CpCmd 1.46
109 TestFunctional/parallel/MySQL 35.43
110 TestFunctional/parallel/FileSync 0.22
111 TestFunctional/parallel/CertSync 1.53
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
119 TestFunctional/parallel/License 0.57
129 TestFunctional/parallel/ServiceCmd/DeployApp 12.18
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
131 TestFunctional/parallel/ProfileCmd/profile_list 0.34
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
133 TestFunctional/parallel/MountCmd/any-port 8.69
134 TestFunctional/parallel/MountCmd/specific-port 2.15
135 TestFunctional/parallel/ServiceCmd/List 0.35
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
139 TestFunctional/parallel/ServiceCmd/Format 0.34
140 TestFunctional/parallel/ServiceCmd/URL 0.38
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
145 TestFunctional/parallel/ImageCommands/ImageBuild 3.61
146 TestFunctional/parallel/ImageCommands/Setup 2.1
147 TestFunctional/parallel/Version/short 0.07
148 TestFunctional/parallel/Version/components 0.97
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.87
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.28
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.63
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.48
156 TestFunctional/parallel/ImageCommands/ImageRemove 1.07
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.09
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.65
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 265.48
166 TestMultiControlPlane/serial/DeployApp 6.88
167 TestMultiControlPlane/serial/PingHostFromPods 1.39
168 TestMultiControlPlane/serial/AddWorkerNode 46.97
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.59
171 TestMultiControlPlane/serial/CopyFile 13.83
172 TestMultiControlPlane/serial/StopSecondaryNode 3.97
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.42
174 TestMultiControlPlane/serial/RestartSecondaryNode 45.94
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.56
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.48
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
180 TestMultiControlPlane/serial/RestartCluster 376.35
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
182 TestMultiControlPlane/serial/AddSecondaryNode 75.71
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
187 TestJSONOutput/start/Command 53.89
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.83
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.67
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 9.55
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.21
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 94.65
219 TestMountStart/serial/StartWithMountFirst 28.38
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 26.41
222 TestMountStart/serial/VerifyMountSecond 0.4
223 TestMountStart/serial/DeleteFirst 0.9
224 TestMountStart/serial/VerifyMountPostDelete 0.4
225 TestMountStart/serial/Stop 1.34
226 TestMountStart/serial/RestartStopped 22.6
227 TestMountStart/serial/VerifyMountPostStop 0.39
230 TestMultiNode/serial/FreshStart2Nodes 103.71
231 TestMultiNode/serial/DeployApp2Nodes 5.65
232 TestMultiNode/serial/PingHostFrom2Pods 0.9
233 TestMultiNode/serial/AddNode 41.17
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.23
236 TestMultiNode/serial/CopyFile 7.46
237 TestMultiNode/serial/StopNode 2.44
238 TestMultiNode/serial/StartAfterStop 30.29
240 TestMultiNode/serial/DeleteNode 2.51
242 TestMultiNode/serial/RestartMultiNode 168.22
243 TestMultiNode/serial/ValidateNameConflict 47.15
250 TestScheduledStopUnix 115.8
254 TestRunningBinaryUpgrade 232.8
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
267 TestNoKubernetes/serial/StartWithK8s 95.28
275 TestNetworkPlugins/group/false 4.73
279 TestNoKubernetes/serial/StartWithStopK8s 43.54
280 TestNoKubernetes/serial/Start 53.83
282 TestPause/serial/Start 104.5
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
284 TestNoKubernetes/serial/ProfileList 29.27
285 TestNoKubernetes/serial/Stop 2.17
286 TestNoKubernetes/serial/StartNoArgs 25.22
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
288 TestStoppedBinaryUpgrade/Setup 2.38
289 TestStoppedBinaryUpgrade/Upgrade 124.36
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
295 TestStartStop/group/no-preload/serial/FirstStart 132.75
296 TestStartStop/group/no-preload/serial/DeployApp 10.3
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
300 TestStartStop/group/embed-certs/serial/FirstStart 60.29
301 TestStartStop/group/embed-certs/serial/DeployApp 10.28
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
308 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.57
309 TestStartStop/group/no-preload/serial/SecondStart 717.66
310 TestStartStop/group/old-k8s-version/serial/Stop 5.46
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
317 TestStartStop/group/embed-certs/serial/SecondStart 553.86
319 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 422.11
329 TestStartStop/group/newest-cni/serial/FirstStart 59.57
330 TestNetworkPlugins/group/auto/Start 67.18
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.33
333 TestStartStop/group/newest-cni/serial/Stop 10.66
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
335 TestStartStop/group/newest-cni/serial/SecondStart 39.78
336 TestNetworkPlugins/group/auto/KubeletFlags 0.24
337 TestNetworkPlugins/group/auto/NetCatPod 11.28
338 TestNetworkPlugins/group/auto/DNS 33.78
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
342 TestStartStop/group/newest-cni/serial/Pause 2.61
343 TestNetworkPlugins/group/kindnet/Start 68.32
344 TestNetworkPlugins/group/auto/Localhost 0.12
345 TestNetworkPlugins/group/auto/HairPin 0.14
346 TestNetworkPlugins/group/calico/Start 95.68
347 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
348 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
349 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
350 TestNetworkPlugins/group/custom-flannel/Start 91.92
351 TestNetworkPlugins/group/kindnet/DNS 0.2
352 TestNetworkPlugins/group/kindnet/Localhost 0.15
353 TestNetworkPlugins/group/kindnet/HairPin 0.14
354 TestNetworkPlugins/group/enable-default-cni/Start 81.17
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.21
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/flannel/Start 94.15
359 TestNetworkPlugins/group/calico/KubeletFlags 0.24
360 TestNetworkPlugins/group/calico/NetCatPod 11.32
361 TestNetworkPlugins/group/calico/DNS 0.23
362 TestNetworkPlugins/group/calico/Localhost 0.19
363 TestNetworkPlugins/group/calico/HairPin 0.16
364 TestNetworkPlugins/group/bridge/Start 67.23
365 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
366 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.34
367 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
368 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
369 TestNetworkPlugins/group/custom-flannel/DNS 0.2
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
375 TestNetworkPlugins/group/flannel/ControllerPod 6.01
376 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
377 TestNetworkPlugins/group/flannel/NetCatPod 11.23
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
379 TestNetworkPlugins/group/bridge/NetCatPod 11.27
380 TestNetworkPlugins/group/flannel/DNS 0.19
381 TestNetworkPlugins/group/flannel/Localhost 0.15
382 TestNetworkPlugins/group/flannel/HairPin 0.14
383 TestNetworkPlugins/group/bridge/DNS 0.17
384 TestNetworkPlugins/group/bridge/Localhost 0.13
385 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.20.0/json-events (48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-543401 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-543401 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (47.995985993s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (48.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-543401
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-543401: exit status 85 (69.731224ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-543401 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:27 UTC |          |
	|         | -p download-only-543401        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 21:27:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 21:27:57.858923   13013 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:27:57.859044   13013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:27:57.859053   13013 out.go:304] Setting ErrFile to fd 2...
	I0410 21:27:57.859057   13013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:27:57.859250   13013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	W0410 21:27:57.859409   13013 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18610-5679/.minikube/config/config.json: open /home/jenkins/minikube-integration/18610-5679/.minikube/config/config.json: no such file or directory
	I0410 21:27:57.859987   13013 out.go:298] Setting JSON to true
	I0410 21:27:57.860830   13013 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":620,"bootTime":1712783858,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 21:27:57.860893   13013 start.go:139] virtualization: kvm guest
	I0410 21:27:57.863424   13013 out.go:97] [download-only-543401] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 21:27:57.865057   13013 out.go:169] MINIKUBE_LOCATION=18610
	W0410 21:27:57.863532   13013 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball: no such file or directory
	I0410 21:27:57.863608   13013 notify.go:220] Checking for updates...
	I0410 21:27:57.868201   13013 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 21:27:57.870015   13013 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 21:27:57.871589   13013 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:27:57.873034   13013 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0410 21:27:57.875393   13013 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0410 21:27:57.875675   13013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 21:27:57.975571   13013 out.go:97] Using the kvm2 driver based on user configuration
	I0410 21:27:57.975619   13013 start.go:297] selected driver: kvm2
	I0410 21:27:57.975629   13013 start.go:901] validating driver "kvm2" against <nil>
	I0410 21:27:57.976019   13013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:27:57.976138   13013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 21:27:57.990622   13013 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 21:27:57.990673   13013 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0410 21:27:57.991168   13013 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0410 21:27:57.991305   13013 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0410 21:27:57.991369   13013 cni.go:84] Creating CNI manager for ""
	I0410 21:27:57.991382   13013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 21:27:57.991390   13013 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0410 21:27:57.991439   13013 start.go:340] cluster config:
	{Name:download-only-543401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-543401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:27:57.991602   13013 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:27:57.993465   13013 out.go:97] Downloading VM boot image ...
	I0410 21:27:57.993510   13013 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18610-5679/.minikube/cache/iso/amd64/minikube-v1.33.0-1712743565-18610-amd64.iso
	I0410 21:28:08.769905   13013 out.go:97] Starting "download-only-543401" primary control-plane node in "download-only-543401" cluster
	I0410 21:28:08.769942   13013 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 21:28:08.864496   13013 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0410 21:28:08.864534   13013 cache.go:56] Caching tarball of preloaded images
	I0410 21:28:08.864709   13013 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 21:28:08.867101   13013 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0410 21:28:08.867123   13013 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0410 21:28:08.970773   13013 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0410 21:28:21.487580   13013 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0410 21:28:21.487694   13013 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0410 21:28:22.391100   13013 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0410 21:28:22.391457   13013 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/download-only-543401/config.json ...
	I0410 21:28:22.391498   13013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/download-only-543401/config.json: {Name:mk5e6817d00af9097d1e8eb5cfbde580eac79ddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:28:22.391693   13013 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0410 21:28:22.391888   13013 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18610-5679/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-543401 host does not exist
	  To start a cluster, run: "minikube start -p download-only-543401"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
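
The ==> Last Start <== log above walks through the download-only flow: fetch the VM boot image, fetch the v1.20.0 preload tarball, then verify the tarball against the MD5 value carried in the download URL (md5:f93b07cde9c3289306cbaeb7a1803c19) before caching it under .minikube/cache. As a rough Go sketch of that verification step only (hypothetical file name and standalone helper, not minikube's actual preload.go code):

	package main
	
	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)
	
	// verifyMD5 compares the MD5 digest of a downloaded file against an
	// expected hex string, mirroring the "verifying checksum" step in the
	// log above.
	func verifyMD5(path, expected string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
	
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != expected {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
		}
		return nil
	}
	
	func main() {
		// Hypothetical local path; the expected value here is the md5
		// query parameter shown in the preload download URL above.
		err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4", "f93b07cde9c3289306cbaeb7a1803c19")
		fmt.Println(err)
	}
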

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-543401
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/json-events (12.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-765356 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-765356 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.573511356s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (12.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-765356
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-765356: exit status 85 (72.296342ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-543401 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:27 UTC |                     |
	|         | -p download-only-543401        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:28 UTC | 10 Apr 24 21:28 UTC |
	| delete  | -p download-only-543401        | download-only-543401 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:28 UTC | 10 Apr 24 21:28 UTC |
	| start   | -o=json --download-only        | download-only-765356 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:28 UTC |                     |
	|         | -p download-only-765356        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 21:28:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 21:28:46.191347   13332 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:28:46.191459   13332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:28:46.191477   13332 out.go:304] Setting ErrFile to fd 2...
	I0410 21:28:46.191486   13332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:28:46.192180   13332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:28:46.193375   13332 out.go:298] Setting JSON to true
	I0410 21:28:46.194175   13332 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":669,"bootTime":1712783858,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 21:28:46.194241   13332 start.go:139] virtualization: kvm guest
	I0410 21:28:46.196168   13332 out.go:97] [download-only-765356] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 21:28:46.197564   13332 out.go:169] MINIKUBE_LOCATION=18610
	I0410 21:28:46.196286   13332 notify.go:220] Checking for updates...
	I0410 21:28:46.200059   13332 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 21:28:46.201405   13332 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 21:28:46.202980   13332 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:28:46.204270   13332 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0410 21:28:46.206668   13332 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0410 21:28:46.206924   13332 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 21:28:46.239906   13332 out.go:97] Using the kvm2 driver based on user configuration
	I0410 21:28:46.239940   13332 start.go:297] selected driver: kvm2
	I0410 21:28:46.239945   13332 start.go:901] validating driver "kvm2" against <nil>
	I0410 21:28:46.240252   13332 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:28:46.240328   13332 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 21:28:46.254667   13332 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 21:28:46.254738   13332 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0410 21:28:46.255185   13332 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0410 21:28:46.255316   13332 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0410 21:28:46.255384   13332 cni.go:84] Creating CNI manager for ""
	I0410 21:28:46.255401   13332 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 21:28:46.255409   13332 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0410 21:28:46.255453   13332 start.go:340] cluster config:
	{Name:download-only-765356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-765356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:28:46.255535   13332 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:28:46.257244   13332 out.go:97] Starting "download-only-765356" primary control-plane node in "download-only-765356" cluster
	I0410 21:28:46.257273   13332 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 21:28:46.438087   13332 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0410 21:28:46.438118   13332 cache.go:56] Caching tarball of preloaded images
	I0410 21:28:46.438278   13332 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0410 21:28:46.440308   13332 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0410 21:28:46.440321   13332 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 ...
	I0410 21:28:46.538654   13332 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6f4e94cb6232b24c3932ab20b1ee6dad -> /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-765356 host does not exist
	  To start a cluster, run: "minikube start -p download-only-765356"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-765356
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-753930
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-753930: exit status 85 (71.732035ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-543401 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:27 UTC |                     |
	|         | -p download-only-543401           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:28 UTC | 10 Apr 24 21:28 UTC |
	| delete  | -p download-only-543401           | download-only-543401 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:28 UTC | 10 Apr 24 21:28 UTC |
	| start   | -o=json --download-only           | download-only-765356 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:28 UTC |                     |
	|         | -p download-only-765356           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:28 UTC | 10 Apr 24 21:28 UTC |
	| delete  | -p download-only-765356           | download-only-765356 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:28 UTC | 10 Apr 24 21:28 UTC |
	| start   | -o=json --download-only           | download-only-753930 | jenkins | v1.33.0-beta.0 | 10 Apr 24 21:28 UTC |                     |
	|         | -p download-only-753930           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1 |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/10 21:28:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0410 21:28:59.105873   13537 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:28:59.106128   13537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:28:59.106139   13537 out.go:304] Setting ErrFile to fd 2...
	I0410 21:28:59.106143   13537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:28:59.106326   13537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:28:59.106903   13537 out.go:298] Setting JSON to true
	I0410 21:28:59.107657   13537 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":681,"bootTime":1712783858,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 21:28:59.107721   13537 start.go:139] virtualization: kvm guest
	I0410 21:28:59.109930   13537 out.go:97] [download-only-753930] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 21:28:59.111740   13537 out.go:169] MINIKUBE_LOCATION=18610
	I0410 21:28:59.110122   13537 notify.go:220] Checking for updates...
	I0410 21:28:59.114782   13537 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 21:28:59.116422   13537 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 21:28:59.117941   13537 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:28:59.119425   13537 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0410 21:28:59.122102   13537 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0410 21:28:59.122345   13537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 21:28:59.156800   13537 out.go:97] Using the kvm2 driver based on user configuration
	I0410 21:28:59.156823   13537 start.go:297] selected driver: kvm2
	I0410 21:28:59.156830   13537 start.go:901] validating driver "kvm2" against <nil>
	I0410 21:28:59.157191   13537 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:28:59.157269   13537 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18610-5679/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0410 21:28:59.171808   13537 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0410 21:28:59.171864   13537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0410 21:28:59.172351   13537 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0410 21:28:59.172567   13537 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0410 21:28:59.172626   13537 cni.go:84] Creating CNI manager for ""
	I0410 21:28:59.172643   13537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0410 21:28:59.172651   13537 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0410 21:28:59.172704   13537 start.go:340] cluster config:
	{Name:download-only-753930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:download-only-753930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:28:59.172791   13537 iso.go:125] acquiring lock: {Name:mk5a268a8a4dbf02629446a098c1de60574dce10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0410 21:28:59.174625   13537 out.go:97] Starting "download-only-753930" primary control-plane node in "download-only-753930" cluster
	I0410 21:28:59.174636   13537 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 21:28:59.264520   13537 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.1/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0410 21:28:59.264551   13537 cache.go:56] Caching tarball of preloaded images
	I0410 21:28:59.264719   13537 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 21:28:59.266729   13537 out.go:97] Downloading Kubernetes v1.30.0-rc.1 preload ...
	I0410 21:28:59.266743   13537 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0410 21:28:59.361788   13537 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.1/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:87f68ecc43ec0a2c6db951923ee9e281 -> /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0410 21:29:10.164393   13537 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0410 21:29:10.164507   13537 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18610-5679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0410 21:29:10.924223   13537 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.1 on crio
	I0410 21:29:10.924561   13537 profile.go:143] Saving config to /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/download-only-753930/config.json ...
	I0410 21:29:10.924590   13537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/download-only-753930/config.json: {Name:mkc35d51ea85e287a7e67ae59e8773c90c3218b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0410 21:29:10.924738   13537 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime crio
	I0410 21:29:10.924903   13537 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18610-5679/.minikube/cache/linux/amd64/v1.30.0-rc.1/kubectl
	I0410 21:29:28.247329   13537 out.go:169] 
	W0410 21:29:28.248860   13537 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl.sha256 Dst:/home/jenkins/minikube-integration/18610-5679/.minikube/cache/linux/amd64/v1.30.0-rc.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x47f2020 0x47f2020 0x47f2020 0x47f2020 0x47f2020 0x47f2020 0x47f2020] Decompressors:map[bz2:0xc0006c1ab0 gz:0xc0006c1ab8 tar:0xc0006c19c0 tar.bz2:0xc0006c1a10 tar.gz:0xc0006c1a20 tar.xz:0xc0006c1a30 tar.zst:0xc0006c1a50 tbz2:0xc0006c1a10 tgz:0xc0006c1a20 txz:0xc0006c1a30 tzst:0xc0006c1a50 xz:0xc0006c1ac0 zip:0xc0006c1ad0 zst:0xc0006c1ac8] Getters:map[file:0xc0026ae690 http:0xc0007b21e0 https:0xc0007b2230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60796->151.101.193.55:443: read: connection reset by peer
	W0410 21:29:28.248871   13537 out_reason.go:110] 
	W0410 21:29:28.251330   13537 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0410 21:29:28.252641   13537 out.go:169] 
	
	
	* The control-plane node download-only-753930 host does not exist
	  To start a cluster, run: "minikube start -p download-only-753930"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.1/LogsDuration (0.07s)
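
The Last Start log above ends with a failed kubectl download from dl.k8s.io ("read: connection reset by peer"). Purely as an illustration of absorbing that kind of transient network error (a hypothetical standalone helper, not minikube's actual download code), a simple retry wrapper around the HTTP fetch might look like:

	package main
	
	import (
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)
	
	// downloadWithRetry fetches url to dst, retrying a few times on errors
	// such as the connection reset seen in the log above. Illustrative only.
	func downloadWithRetry(url, dst string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			if err := download(url, dst); err != nil {
				lastErr = err
				time.Sleep(time.Duration(i+1) * 2 * time.Second) // simple linear backoff
				continue
			}
			return nil
		}
		return fmt.Errorf("download failed after %d attempts: %w", attempts, lastErr)
	}
	
	func download(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("unexpected status: %s", resp.Status)
		}
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, resp.Body)
		return err
	}
	
	func main() {
		// The URL below is the one that failed in the log; the destination
		// file name here is a hypothetical local path.
		err := downloadWithRetry("https://dl.k8s.io/release/v1.30.0-rc.1/bin/linux/amd64/kubectl", "kubectl.download", 3)
		fmt.Println(err)
	}
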

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-753930
--- PASS: TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-689773 --alsologtostderr --binary-mirror http://127.0.0.1:43159 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-689773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-689773
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
x
+
TestOffline (125.48s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-874231 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-874231 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m4.462239086s)
helpers_test.go:175: Cleaning up "offline-crio-874231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-874231
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-874231: (1.018297229s)
--- PASS: TestOffline (125.48s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-577364
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-577364: exit status 85 (58.896567ms)

                                                
                                                
-- stdout --
	* Profile "addons-577364" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-577364"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-577364
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-577364: exit status 85 (58.572157ms)

                                                
                                                
-- stdout --
	* Profile "addons-577364" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-577364"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (149.95s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-577364 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-577364 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m29.948392372s)
--- PASS: TestAddons/Setup (149.95s)

                                                
                                    
x
+
TestAddons/parallel/Registry (83.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 26.766894ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-7rzv5" [d1bcce9f-b2cd-45a4-a0c8-cff2fc3184d2] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.010268459s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lztl5" [c1d5454c-bd26-48f1-acdd-90e02e04ff42] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005738145s
addons_test.go:340: (dbg) Run:  kubectl --context addons-577364 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-577364 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-577364 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.000037117s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (83.67s)
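
Note: the reachability check above boils down to running a throwaway busybox pod that issues wget --spider against registry.kube-system.svc.cluster.local. A minimal stand-alone sketch of that same probe, assuming kubectl is on PATH and using the context name from this log; the helper name probeRegistry is introduced here and is not part of the test suite:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeRegistry launches a one-off busybox pod that sends a spider request
	// to the in-cluster registry service, mirroring the kubectl run command
	// shown in the log above.
	func probeRegistry(context string) error {
		cmd := exec.Command("kubectl", "--context", context,
			"run", "--rm", "-i", "registry-probe",
			"--restart=Never",
			"--image=gcr.io/k8s-minikube/busybox",
			"--", "sh", "-c",
			"wget --spider -S http://registry.kube-system.svc.cluster.local")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		return err
	}

	func main() {
		if err := probeRegistry("addons-577364"); err != nil {
			fmt.Println("registry probe failed:", err)
		}
	}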

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-d2cvt" [9037d250-c0ce-440d-bd05-5169cf55b8e9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0053811s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-577364
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-577364: (5.824062533s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.98s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 10.563508ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-bd56m" [cb0f0dc5-19c6-4cec-a7f5-82bd11fc7537] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005578087s
addons_test.go:415: (dbg) Run:  kubectl --context addons-577364 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.98s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.2s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.43147ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-mrf5g" [eda44e0a-72f7-41d3-a030-7ecf1007bab9] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006331307s
addons_test.go:473: (dbg) Run:  kubectl --context addons-577364 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-577364 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.433033272s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.20s)

                                                
                                    
x
+
TestAddons/parallel/CSI (110.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 35.306939ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-577364 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:19 [DEBUG] GET http://192.168.39.209:5000
2024/04/10 21:32:19 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:19 [DEBUG] GET http://192.168.39.209:5000: retrying in 1s (4 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:20 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:20 [DEBUG] GET http://192.168.39.209:5000: retrying in 2s (3 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:22 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:22 [DEBUG] GET http://192.168.39.209:5000: retrying in 4s (2 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:26 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:26 [DEBUG] GET http://192.168.39.209:5000: retrying in 8s (1 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:34 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:35 [DEBUG] GET http://192.168.39.209:5000
2024/04/10 21:32:35 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:35 [DEBUG] GET http://192.168.39.209:5000: retrying in 1s (4 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:36 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:36 [DEBUG] GET http://192.168.39.209:5000: retrying in 2s (3 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:38 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:38 [DEBUG] GET http://192.168.39.209:5000: retrying in 4s (2 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:42 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:42 [DEBUG] GET http://192.168.39.209:5000: retrying in 8s (1 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:50 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:51 [DEBUG] GET http://192.168.39.209:5000
2024/04/10 21:32:51 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:51 [DEBUG] GET http://192.168.39.209:5000: retrying in 1s (4 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:52 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:52 [DEBUG] GET http://192.168.39.209:5000: retrying in 2s (3 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/10 21:32:54 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:54 [DEBUG] GET http://192.168.39.209:5000: retrying in 4s (2 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-577364 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9536cb5d-fef3-45aa-ac68-8d6f07777b6f] Pending
helpers_test.go:344: "task-pv-pod" [9536cb5d-fef3-45aa-ac68-8d6f07777b6f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9536cb5d-fef3-45aa-ac68-8d6f07777b6f] Running
2024/04/10 21:33:06 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:33:07 [DEBUG] GET http://192.168.39.209:5000
2024/04/10 21:33:07 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:33:07 [DEBUG] GET http://192.168.39.209:5000: retrying in 1s (4 left)
2024/04/10 21:33:08 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:33:08 [DEBUG] GET http://192.168.39.209:5000: retrying in 2s (3 left)
2024/04/10 21:33:10 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:33:10 [DEBUG] GET http://192.168.39.209:5000: retrying in 4s (2 left)
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004200254s
addons_test.go:584: (dbg) Run:  kubectl --context addons-577364 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-577364 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-577364 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-577364 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-577364 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-577364 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/04/10 21:33:14 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:33:14 [DEBUG] GET http://192.168.39.209:5000: retrying in 8s (1 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-577364 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1ef1b4a7-59de-4d91-a922-02428c06efa2] Pending
helpers_test.go:344: "task-pv-pod-restore" [1ef1b4a7-59de-4d91-a922-02428c06efa2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1ef1b4a7-59de-4d91-a922-02428c06efa2] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.006039831s
addons_test.go:626: (dbg) Run:  kubectl --context addons-577364 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-577364 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-577364 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-577364 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.809762397s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (110.51s)
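
Note: the long run of helpers_test.go:394 lines above is a poll loop on the claim's .status.phase until it reports Bound. A minimal sketch of that pattern, assuming kubectl is on PATH; the function name, the 10s poll interval, and the 6m timeout are illustrative rather than the harness's exact constants:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase polls `kubectl get pvc -o jsonpath={.status.phase}` until
	// the claim reaches the wanted phase or the timeout expires, mirroring the
	// repeated jsonpath queries in the log above.
	func waitForPVCPhase(context, ns, pvc, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pvc", pvc, "-n", ns,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(10 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, pvc, want, timeout)
	}

	func main() {
		if err := waitForPVCPhase("addons-577364", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}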

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-577364 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-k9nwt" [03c2aa31-a311-421a-bb11-2797c4bb051f] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-k9nwt" [03c2aa31-a311-421a-bb11-2797c4bb051f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-k9nwt" [03c2aa31-a311-421a-bb11-2797c4bb051f] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.004872133s
--- PASS: TestAddons/parallel/Headlamp (17.97s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-cs2ff" [96caaf74-c5d4-4e0e-b2e6-24d80c681387] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003766196s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-577364
--- PASS: TestAddons/parallel/CloudSpanner (6.76s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (12.44s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-577364 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-577364 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577364 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ddd58adf-2fa5-4230-8c3e-5fccb95b368e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ddd58adf-2fa5-4230-8c3e-5fccb95b368e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ddd58adf-2fa5-4230-8c3e-5fccb95b368e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0040746s
addons_test.go:891: (dbg) Run:  kubectl --context addons-577364 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 ssh "cat /opt/local-path-provisioner/pvc-d17dbfe7-521e-40fb-b0d3-e5165151a7dc_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-577364 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-577364 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-577364 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.44s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-s9dvf" [a5317961-998f-441b-aa29-8cd21367e96c] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005652868s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-577364
2024/04/10 21:32:58 [ERR] GET http://192.168.39.209:5000 request failed: Get "http://192.168.39.209:5000": dial tcp 192.168.39.209:5000: connect: connection refused
2024/04/10 21:32:58 [DEBUG] GET http://192.168.39.209:5000: retrying in 8s (1 left)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)
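
Note: the interleaved [ERR]/[DEBUG] GET http://192.168.39.209:5000 lines in this test and in the CSI test above come from a probe that retries the registry endpoint with a doubling delay (1s, 2s, 4s, 8s) while the addon is being torn down. The sketch below only illustrates that backoff cadence with the standard library; it is not the HTTP client the tests actually use:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// getWithBackoff retries a GET with a doubling delay, matching the
	// "retrying in 1s/2s/4s/8s (N left)" cadence of the log lines above.
	func getWithBackoff(url string, attempts int) (*http.Response, error) {
		delay := time.Second
		var lastErr error
		for i := 0; i < attempts; i++ {
			resp, err := http.Get(url)
			if err == nil {
				return resp, nil
			}
			lastErr = err
			fmt.Printf("GET %s failed (%v), retrying in %v (%d left)\n", url, err, delay, attempts-i-1)
			time.Sleep(delay)
			delay *= 2
		}
		return nil, fmt.Errorf("GET %s: giving up after %d attempts: %w", url, attempts, lastErr)
	}

	func main() {
		resp, err := getWithBackoff("http://192.168.39.209:5000", 5)
		if err != nil {
			fmt.Println(err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry answered:", resp.Status)
	}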

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-sznp7" [136b30cb-878d-43ec-9fe7-77f52732f659] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004814447s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-577364 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-577364 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestCertOptions (55.33s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-849843 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-849843 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (54.051333979s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-849843 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-849843 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-849843 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-849843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-849843
--- PASS: TestCertOptions (55.33s)
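
Note: TestCertOptions starts the cluster with extra --apiserver-ips/--apiserver-names and then inspects the generated apiserver certificate via the openssl command above. A minimal sketch of an equivalent SAN/expiry check with crypto/x509, assuming the certificate has first been copied out of the node to a local apiserver.crt (a hypothetical path, not one the test creates):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// Loads a PEM-encoded certificate and prints its subject alternative names
	// and expiry, which is the information the openssl inspection above is
	// checked against (192.168.15.15 and www.google.com in this run).
	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the node's cert
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		fmt.Println("DNS SANs:", cert.DNSNames)
		for _, ip := range cert.IPAddresses {
			fmt.Println("IP SAN:", ip)
		}
		fmt.Println("NotAfter:", cert.NotAfter)
	}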

                                                
                                    
x
+
TestCertExpiration (627.8s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-464519 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-464519 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m23.191613092s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-464519 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-464519 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (6m3.602966998s)
helpers_test.go:175: Cleaning up "cert-expiration-464519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-464519
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-464519: (1.008486309s)
--- PASS: TestCertExpiration (627.80s)

                                                
                                    
x
+
TestForceSystemdFlag (59.52s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-738205 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-738205 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.464660088s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-738205 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-738205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-738205
--- PASS: TestForceSystemdFlag (59.52s)

                                                
                                    
x
+
TestForceSystemdEnv (45.99s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-151945 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-151945 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.186166713s)
helpers_test.go:175: Cleaning up "force-systemd-env-151945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-151945
--- PASS: TestForceSystemdEnv (45.99s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.09s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.09s)

                                                
                                    
x
+
TestErrorSpam/setup (46.31s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-086927 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-086927 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-086927 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-086927 --driver=kvm2  --container-runtime=crio: (46.312021171s)
--- PASS: TestErrorSpam/setup (46.31s)

                                                
                                    
x
+
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
x
+
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
x
+
TestErrorSpam/pause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
x
+
TestErrorSpam/stop (5.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 stop: (2.298554162s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 stop: (1.578958511s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-086927 --log_dir /tmp/nospam-086927 stop: (1.593270343s)
--- PASS: TestErrorSpam/stop (5.47s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18610-5679/.minikube/files/etc/test/nested/copy/13001/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (100.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-130509 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-130509 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m40.321417331s)
--- PASS: TestFunctional/serial/StartWithProxy (100.32s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (37.98s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-130509 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-130509 --alsologtostderr -v=8: (37.981826821s)
functional_test.go:659: soft start took 37.982722007s for "functional-130509" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.98s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-130509 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 cache add registry.k8s.io/pause:3.1: (1.240163137s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 cache add registry.k8s.io/pause:3.3: (1.314738913s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 cache add registry.k8s.io/pause:latest: (1.200516547s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.76s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-130509 /tmp/TestFunctionalserialCacheCmdcacheadd_local2306321581/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 cache add minikube-local-cache-test:functional-130509
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 cache add minikube-local-cache-test:functional-130509: (1.76162749s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 cache delete minikube-local-cache-test:functional-130509
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-130509
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.18s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-130509 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (225.548168ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 cache reload: (1.010227239s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)
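
Note: the cache_reload sequence above removes an image inside the node with crictl, confirms `crictl inspecti` then fails, restores the cache with `minikube cache reload`, and re-checks. A minimal sketch of that same sequence via os/exec, assuming the minikube binary is on PATH and reusing the profile name from this log; the run helper is introduced here for brevity:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command, echoes its combined output, and returns its error.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		profile := "functional-130509" // profile name taken from this log
		image := "registry.k8s.io/pause:latest"

		// Remove the cached image inside the node.
		run("minikube", "-p", profile, "ssh", "sudo crictl rmi "+image)

		// Expected to fail while the image is absent.
		if err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
			fmt.Println("image absent, as expected")
		}

		// Restore everything in minikube's local cache onto the node.
		run("minikube", "-p", profile, "cache", "reload")

		// Now the inspect should succeed again.
		if err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
			fmt.Println("unexpected: image still missing:", err)
		}
	}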

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 kubectl -- --context functional-130509 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-130509 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (33.82s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-130509 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-130509 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.817379314s)
functional_test.go:757: restart took 33.817493169s for "functional-130509" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.82s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-130509 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.61s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 logs: (1.605226487s)
--- PASS: TestFunctional/serial/LogsCmd (1.61s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 logs --file /tmp/TestFunctionalserialLogsFileCmd385389626/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 logs --file /tmp/TestFunctionalserialLogsFileCmd385389626/001/logs.txt: (1.534734548s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                    
TestFunctional/serial/InvalidService (4.35s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-130509 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-130509
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-130509: exit status 115 (290.694374ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.252:31424 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-130509 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)
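Exit status 115 above maps to SVC_UNREACHABLE: the Service object exists (a NodePort URL is even printed), but no running pod backs it, so the URL would not answer. A sketch of the same check, reusing the test's manifest:

    kubectl --context functional-130509 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-130509; echo "exit: $?"   # prints 115
    kubectl --context functional-130509 delete -f testdata/invalidsvc.yaml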

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-130509 config get cpus: exit status 14 (68.652898ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-130509 config get cpus: exit status 14 (66.5131ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
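As the run shows, `config get` on an unset key exits with status 14 rather than printing an empty value. A minimal sketch of the round trip, using the same key:

    out/minikube-linux-amd64 -p functional-130509 config get cpus     # exit 14: key not set
    out/minikube-linux-amd64 -p functional-130509 config set cpus 2
    out/minikube-linux-amd64 -p functional-130509 config get cpus     # prints the stored value, exit 0
    out/minikube-linux-amd64 -p functional-130509 config unset cpus
    out/minikube-linux-amd64 -p functional-130509 config get cpus     # exit 14 again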

                                                
                                    
TestFunctional/parallel/DashboardCmd (18.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-130509 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-130509 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22457: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.59s)

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-130509 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-130509 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (177.742233ms)

                                                
                                                
-- stdout --
	* [functional-130509] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18610
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 21:42:08.394188   22132 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:42:08.394473   22132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:42:08.394484   22132 out.go:304] Setting ErrFile to fd 2...
	I0410 21:42:08.394490   22132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:42:08.394748   22132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:42:08.395430   22132 out.go:298] Setting JSON to false
	I0410 21:42:08.396712   22132 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1471,"bootTime":1712783858,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 21:42:08.396801   22132 start.go:139] virtualization: kvm guest
	I0410 21:42:08.399462   22132 out.go:177] * [functional-130509] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 21:42:08.401528   22132 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 21:42:08.401491   22132 notify.go:220] Checking for updates...
	I0410 21:42:08.403128   22132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 21:42:08.405578   22132 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 21:42:08.407487   22132 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:42:08.409150   22132 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 21:42:08.410627   22132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 21:42:08.412444   22132 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:42:08.412857   22132 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:42:08.412908   22132 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:42:08.428307   22132 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
	I0410 21:42:08.428943   22132 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:42:08.429676   22132 main.go:141] libmachine: Using API Version  1
	I0410 21:42:08.429699   22132 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:42:08.430026   22132 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:42:08.430216   22132 main.go:141] libmachine: (functional-130509) Calling .DriverName
	I0410 21:42:08.430463   22132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 21:42:08.430855   22132 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:42:08.430894   22132 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:42:08.453288   22132 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0410 21:42:08.453661   22132 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:42:08.454204   22132 main.go:141] libmachine: Using API Version  1
	I0410 21:42:08.454230   22132 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:42:08.454599   22132 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:42:08.454808   22132 main.go:141] libmachine: (functional-130509) Calling .DriverName
	I0410 21:42:08.491321   22132 out.go:177] * Using the kvm2 driver based on existing profile
	I0410 21:42:08.492966   22132 start.go:297] selected driver: kvm2
	I0410 21:42:08.492988   22132 start.go:901] validating driver "kvm2" against &{Name:functional-130509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-130509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.252 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:42:08.493146   22132 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 21:42:08.495708   22132 out.go:177] 
	W0410 21:42:08.497133   22132 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0410 21:42:08.498667   22132 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-130509 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
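--dry-run validates the requested configuration against the existing profile without touching the VM; here the 250MB request is rejected because it is below the 1800MB usable minimum, producing exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch:

    # fails fast: requested memory below the usable minimum
    out/minikube-linux-amd64 start -p functional-130509 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio; echo "exit: $?"   # 23
    # succeeds, still without changing the running cluster
    out/minikube-linux-amd64 start -p functional-130509 --dry-run --driver=kvm2 --container-runtime=crio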

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-130509 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-130509 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (171.376081ms)

                                                
                                                
-- stdout --
	* [functional-130509] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18610
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 21:42:08.226176   22092 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:42:08.226326   22092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:42:08.226339   22092 out.go:304] Setting ErrFile to fd 2...
	I0410 21:42:08.226346   22092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:42:08.226806   22092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:42:08.227455   22092 out.go:298] Setting JSON to false
	I0410 21:42:08.228730   22092 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1471,"bootTime":1712783858,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 21:42:08.228828   22092 start.go:139] virtualization: kvm guest
	I0410 21:42:08.231670   22092 out.go:177] * [functional-130509] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0410 21:42:08.233932   22092 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 21:42:08.233897   22092 notify.go:220] Checking for updates...
	I0410 21:42:08.235694   22092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 21:42:08.237421   22092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 21:42:08.239222   22092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 21:42:08.240866   22092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 21:42:08.242360   22092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 21:42:08.244346   22092 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:42:08.244822   22092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:42:08.244880   22092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:42:08.264512   22092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I0410 21:42:08.264908   22092 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:42:08.265446   22092 main.go:141] libmachine: Using API Version  1
	I0410 21:42:08.265471   22092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:42:08.265806   22092 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:42:08.266034   22092 main.go:141] libmachine: (functional-130509) Calling .DriverName
	I0410 21:42:08.266371   22092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 21:42:08.266776   22092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:42:08.266847   22092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:42:08.282654   22092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I0410 21:42:08.283164   22092 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:42:08.283756   22092 main.go:141] libmachine: Using API Version  1
	I0410 21:42:08.283793   22092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:42:08.284179   22092 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:42:08.284409   22092 main.go:141] libmachine: (functional-130509) Calling .DriverName
	I0410 21:42:08.318528   22092 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0410 21:42:08.320190   22092 start.go:297] selected driver: kvm2
	I0410 21:42:08.320208   22092 start.go:901] validating driver "kvm2" against &{Name:functional-130509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18610/minikube-v1.33.0-1712743565-18610-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712743643-18610@sha256:57f6b6f207b748ce717275fbcd6ae3dba156d24f7d9b85b4b7e51b63bacaf9dc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-130509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.252 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0410 21:42:08.320321   22092 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 21:42:08.322937   22092 out.go:177] 
	W0410 21:42:08.324791   22092 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0410 21:42:08.326398   22092 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.33s)
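`minikube status` supports plain, Go-template (-f) and JSON (-o json) output; the template fields are the status struct fields shown in the format string above. A minimal sketch of the three modes:

    out/minikube-linux-amd64 -p functional-130509 status
    out/minikube-linux-amd64 -p functional-130509 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    out/minikube-linux-amd64 -p functional-130509 status -o json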

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-130509 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-130509 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-77xfw" [1bdd6abd-9a40-4504-a5b6-8cde54d0e918] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-77xfw" [1bdd6abd-9a40-4504-a5b6-8cde54d0e918] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004236858s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.252:32766
functional_test.go:1671: http://192.168.39.252:32766: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-77xfw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.252:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.252:32766
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.68s)
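The connect test above is the standard deploy/expose/reach pattern: the echoserver image reflects the request back, which is exactly what the response body shows. A minimal sketch; the curl URL is the one reported by this run and will differ per deployment:

    kubectl --context functional-130509 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-130509 expose deployment hello-node-connect --type=NodePort --port=8080
    # minikube resolves the NodePort to a URL on the VM's IP
    out/minikube-linux-amd64 -p functional-130509 service hello-node-connect --url
    curl http://192.168.39.252:32766/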

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [88424a1c-3126-4c37-89a9-76b4e31b3e5e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005306712s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-130509 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-130509 apply -f testdata/storage-provisioner/pvc.yaml
E0410 21:42:00.248413   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-130509 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-130509 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [994410d2-b41e-4f1b-be61-badc4aa3fcc4] Pending
E0410 21:42:00.889051   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [994410d2-b41e-4f1b-be61-badc4aa3fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0410 21:42:02.169353   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [994410d2-b41e-4f1b-be61-badc4aa3fcc4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.007099996s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-130509 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-130509 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-130509 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [02ca7a98-ed74-4d46-8d11-19c1c003700f] Pending
helpers_test.go:344: "sp-pod" [02ca7a98-ed74-4d46-8d11-19c1c003700f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [02ca7a98-ed74-4d46-8d11-19c1c003700f] Running
E0410 21:42:40.572846   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.005534616s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-130509 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.78s)
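The sequence above checks that data written through the PVC survives pod deletion: a file is created from the first pod, the pod is deleted and recreated from the same manifest, and the file is still there. A minimal sketch reusing the test's manifests:

    kubectl --context functional-130509 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-130509 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-130509 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-130509 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-130509 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-130509 exec sp-pod -- ls /tmp/mount   # foo is still present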

                                                
                                    
TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh -n functional-130509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 cp functional-130509:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4274224566/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh -n functional-130509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh -n functional-130509 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)
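`minikube cp` copies in both directions: a host path to a path inside the node, or <profile>:<path> back to the host. A minimal sketch mirroring the three cases above:

    out/minikube-linux-amd64 -p functional-130509 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-130509 cp functional-130509:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-130509 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-130509 ssh "sudo cat /home/docker/cp-test.txt"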

                                                
                                    
TestFunctional/parallel/MySQL (35.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-130509 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-h89cz" [f13fb1c6-c5cb-4c8c-a294-a065af483d6b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-h89cz" [f13fb1c6-c5cb-4c8c-a294-a065af483d6b] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.00461381s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-130509 exec mysql-859648c796-h89cz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-130509 exec mysql-859648c796-h89cz -- mysql -ppassword -e "show databases;": exit status 1 (126.983373ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-130509 exec mysql-859648c796-h89cz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-130509 exec mysql-859648c796-h89cz -- mysql -ppassword -e "show databases;": exit status 1 (138.464366ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-130509 exec mysql-859648c796-h89cz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.43s)
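The two non-zero exits above are expected: the mysql container reaches Running before mysqld has finished initialising, so the test simply retries the query until it succeeds. A minimal sketch of the same wait loop, with the pod name taken from this run:

    # retry until mysqld accepts connections on its socket
    until kubectl --context functional-130509 exec mysql-859648c796-h89cz -- \
        mysql -ppassword -e "show databases;"; do sleep 2; done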

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13001/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo cat /etc/test/nested/copy/13001/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13001.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo cat /etc/ssl/certs/13001.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13001.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo cat /usr/share/ca-certificates/13001.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo cat /etc/ssl/certs/51391683.0"
E0410 21:42:09.851201   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
functional_test.go:1995: Checking for existence of /etc/ssl/certs/130012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo cat /etc/ssl/certs/130012.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/130012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo cat /usr/share/ca-certificates/130012.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.53s)
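Both the per-ID copies (13001.pem, 130012.pem) and hash-named copies under /etc/ssl/certs are checked; the hash names follow the usual OpenSSL subject-hash convention, with the .0 suffix disambiguating collisions. A sketch of how such a name can be derived, assuming openssl is available inside the VM (an illustration, not part of the test):

    out/minikube-linux-amd64 -p functional-130509 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/13001.pem"
    out/minikube-linux-amd64 -p functional-130509 ssh "sudo cat /etc/ssl/certs/51391683.0"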

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-130509 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-130509 ssh "sudo systemctl is-active docker": exit status 1 (270.595243ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-130509 ssh "sudo systemctl is-active containerd": exit status 1 (242.38644ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
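The non-zero exits here are the expected result: with crio as the active runtime, `systemctl is-active` reports "inactive" for docker and containerd and exits with status 3, which ssh propagates as a failure. A minimal sketch:

    out/minikube-linux-amd64 -p functional-130509 ssh "sudo systemctl is-active crio"        # active, exit 0
    out/minikube-linux-amd64 -p functional-130509 ssh "sudo systemctl is-active docker"      # inactive, exit 3
    out/minikube-linux-amd64 -p functional-130509 ssh "sudo systemctl is-active containerd"  # inactive, exit 3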

                                                
                                    
TestFunctional/parallel/License (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-130509 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-130509 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-7phw8" [b125094b-31dd-4dcb-a005-6e91c4ba8f13] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-7phw8" [b125094b-31dd-4dcb-a005-6e91c4ba8f13] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004362242s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.18s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "270.157308ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "70.731624ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "275.449311ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "57.331847ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdany-port2814646870/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1712785316083625857" to /tmp/TestFunctionalparallelMountCmdany-port2814646870/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1712785316083625857" to /tmp/TestFunctionalparallelMountCmdany-port2814646870/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1712785316083625857" to /tmp/TestFunctionalparallelMountCmdany-port2814646870/001/test-1712785316083625857
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.742999ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 10 21:41 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 10 21:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 10 21:41 test-1712785316083625857
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh cat /mount-9p/test-1712785316083625857
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-130509 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4437f3c6-e1e8-44fd-861e-5b03492ea07e] Pending
helpers_test.go:344: "busybox-mount" [4437f3c6-e1e8-44fd-861e-5b03492ea07e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0410 21:41:59.609788   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:41:59.615710   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:41:59.625963   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:41:59.646276   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:41:59.686624   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:41:59.767017   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:41:59.927409   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [4437f3c6-e1e8-44fd-861e-5b03492ea07e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4437f3c6-e1e8-44fd-861e-5b03492ea07e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004021776s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-130509 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdany-port2814646870/001:/mount-9p --alsologtostderr -v=1] ...
E0410 21:42:04.730134   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.69s)
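The mount test runs `minikube mount` as a background daemon, polls `findmnt` until the 9p mount appears (hence the first non-zero exit above), then verifies the mount from both the host and a pod. A minimal sketch; the host directory name /tmp/mnt-src is hypothetical:

    out/minikube-linux-amd64 mount -p functional-130509 /tmp/mnt-src:/mount-9p &
    # poll until the 9p filesystem is visible inside the VM
    until out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done
    out/minikube-linux-amd64 -p functional-130509 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-130509 ssh "sudo umount -f /mount-9p"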

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdspecific-port2463526370/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.422121ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdspecific-port2463526370/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdspecific-port2463526370/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 service list -o json
functional_test.go:1490: Took "392.309805ms" to run "out/minikube-linux-amd64 -p functional-130509 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2540711122/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2540711122/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2540711122/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T" /mount1: exit status 1 (303.447279ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-130509 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2540711122/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2540711122/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-130509 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2540711122/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)
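
Note: the cleanup verified above hinges on the --kill flag, which stops every mount process still attached to the profile; a minimal sketch with the same binary and profile:

	# tear down any outstanding mount daemons for the profile
	out/minikube-linux-amd64 mount -p functional-130509 --kill=true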

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.252:30760
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.252:30760
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
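
Note: the ServiceCmd subtests above all query the same hello-node service (presumably created earlier in the suite); a sketch of the commands they exercise:

	# list services, as a table and as JSON
	out/minikube-linux-amd64 -p functional-130509 service list
	out/minikube-linux-amd64 -p functional-130509 service list -o json
	# resolve the NodePort endpoint, as an https:// URL and as a plain URL
	out/minikube-linux-amd64 -p functional-130509 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-130509 service hello-node --url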

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-130509 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-130509
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-130509
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-130509 image ls --format short --alsologtostderr:
I0410 21:42:38.891501   23138 out.go:291] Setting OutFile to fd 1 ...
I0410 21:42:38.891747   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0410 21:42:38.891757   23138 out.go:304] Setting ErrFile to fd 2...
I0410 21:42:38.891761   23138 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0410 21:42:38.891947   23138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
I0410 21:42:38.892533   23138 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0410 21:42:38.892650   23138 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0410 21:42:38.893065   23138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0410 21:42:38.893156   23138 main.go:141] libmachine: Launching plugin server for driver kvm2
I0410 21:42:38.907902   23138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34381
I0410 21:42:38.908319   23138 main.go:141] libmachine: () Calling .GetVersion
I0410 21:42:38.908989   23138 main.go:141] libmachine: Using API Version  1
I0410 21:42:38.909014   23138 main.go:141] libmachine: () Calling .SetConfigRaw
I0410 21:42:38.909378   23138 main.go:141] libmachine: () Calling .GetMachineName
I0410 21:42:38.909582   23138 main.go:141] libmachine: (functional-130509) Calling .GetState
I0410 21:42:38.911378   23138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0410 21:42:38.911413   23138 main.go:141] libmachine: Launching plugin server for driver kvm2
I0410 21:42:38.926083   23138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41625
I0410 21:42:38.926598   23138 main.go:141] libmachine: () Calling .GetVersion
I0410 21:42:38.927220   23138 main.go:141] libmachine: Using API Version  1
I0410 21:42:38.927279   23138 main.go:141] libmachine: () Calling .SetConfigRaw
I0410 21:42:38.927601   23138 main.go:141] libmachine: () Calling .GetMachineName
I0410 21:42:38.927884   23138 main.go:141] libmachine: (functional-130509) Calling .DriverName
I0410 21:42:38.928183   23138 ssh_runner.go:195] Run: systemctl --version
I0410 21:42:38.928206   23138 main.go:141] libmachine: (functional-130509) Calling .GetSSHHostname
I0410 21:42:38.931111   23138 main.go:141] libmachine: (functional-130509) DBG | domain functional-130509 has defined MAC address 52:54:00:b6:24:aa in network mk-functional-130509
I0410 21:42:38.931616   23138 main.go:141] libmachine: (functional-130509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:24:aa", ip: ""} in network mk-functional-130509: {Iface:virbr1 ExpiryTime:2024-04-10 22:39:00 +0000 UTC Type:0 Mac:52:54:00:b6:24:aa Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:functional-130509 Clientid:01:52:54:00:b6:24:aa}
I0410 21:42:38.931734   23138 main.go:141] libmachine: (functional-130509) DBG | domain functional-130509 has defined IP address 192.168.39.252 and MAC address 52:54:00:b6:24:aa in network mk-functional-130509
I0410 21:42:38.931962   23138 main.go:141] libmachine: (functional-130509) Calling .GetSSHPort
I0410 21:42:38.932113   23138 main.go:141] libmachine: (functional-130509) Calling .GetSSHKeyPath
I0410 21:42:38.932251   23138 main.go:141] libmachine: (functional-130509) Calling .GetSSHUsername
I0410 21:42:38.932375   23138 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/functional-130509/id_rsa Username:docker}
I0410 21:42:39.008105   23138 ssh_runner.go:195] Run: sudo crictl images --output json
I0410 21:42:39.065647   23138 main.go:141] libmachine: Making call to close driver server
I0410 21:42:39.065662   23138 main.go:141] libmachine: (functional-130509) Calling .Close
I0410 21:42:39.065896   23138 main.go:141] libmachine: Successfully made call to close driver server
I0410 21:42:39.065918   23138 main.go:141] libmachine: Making call to close connection to plugin binary
I0410 21:42:39.065917   23138 main.go:141] libmachine: (functional-130509) DBG | Closing plugin on server side
I0410 21:42:39.065927   23138 main.go:141] libmachine: Making call to close driver server
I0410 21:42:39.065935   23138 main.go:141] libmachine: (functional-130509) Calling .Close
I0410 21:42:39.066178   23138 main.go:141] libmachine: (functional-130509) DBG | Closing plugin on server side
I0410 21:42:39.066263   23138 main.go:141] libmachine: Successfully made call to close driver server
I0410 21:42:39.066309   23138 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-130509 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| localhost/minikube-local-cache-test     | functional-130509  | 58c36adda33b2 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.29.3            | 8c390d98f50c0 | 60.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | c613f16b66424 | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-proxy              | v1.29.3            | a1d263b5dc5b0 | 83.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/google-containers/addon-resizer  | functional-130509  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-apiserver          | v1.29.3            | 39f995c9f1996 | 129MB  |
| registry.k8s.io/kube-controller-manager | v1.29.3            | 6052a25da3f97 | 123MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-130509 image ls --format table --alsologtostderr:
I0410 21:42:39.124601   23206 out.go:291] Setting OutFile to fd 1 ...
I0410 21:42:39.125119   23206 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0410 21:42:39.125167   23206 out.go:304] Setting ErrFile to fd 2...
I0410 21:42:39.125184   23206 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0410 21:42:39.125660   23206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
I0410 21:42:39.126738   23206 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0410 21:42:39.126838   23206 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0410 21:42:39.127193   23206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0410 21:42:39.127246   23206 main.go:141] libmachine: Launching plugin server for driver kvm2
I0410 21:42:39.142001   23206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
I0410 21:42:39.142444   23206 main.go:141] libmachine: () Calling .GetVersion
I0410 21:42:39.142964   23206 main.go:141] libmachine: Using API Version  1
I0410 21:42:39.142988   23206 main.go:141] libmachine: () Calling .SetConfigRaw
I0410 21:42:39.143360   23206 main.go:141] libmachine: () Calling .GetMachineName
I0410 21:42:39.143615   23206 main.go:141] libmachine: (functional-130509) Calling .GetState
I0410 21:42:39.145360   23206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0410 21:42:39.145403   23206 main.go:141] libmachine: Launching plugin server for driver kvm2
I0410 21:42:39.159806   23206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
I0410 21:42:39.160290   23206 main.go:141] libmachine: () Calling .GetVersion
I0410 21:42:39.160761   23206 main.go:141] libmachine: Using API Version  1
I0410 21:42:39.160787   23206 main.go:141] libmachine: () Calling .SetConfigRaw
I0410 21:42:39.161170   23206 main.go:141] libmachine: () Calling .GetMachineName
I0410 21:42:39.161347   23206 main.go:141] libmachine: (functional-130509) Calling .DriverName
I0410 21:42:39.161561   23206 ssh_runner.go:195] Run: systemctl --version
I0410 21:42:39.161587   23206 main.go:141] libmachine: (functional-130509) Calling .GetSSHHostname
I0410 21:42:39.163942   23206 main.go:141] libmachine: (functional-130509) DBG | domain functional-130509 has defined MAC address 52:54:00:b6:24:aa in network mk-functional-130509
I0410 21:42:39.164321   23206 main.go:141] libmachine: (functional-130509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:24:aa", ip: ""} in network mk-functional-130509: {Iface:virbr1 ExpiryTime:2024-04-10 22:39:00 +0000 UTC Type:0 Mac:52:54:00:b6:24:aa Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:functional-130509 Clientid:01:52:54:00:b6:24:aa}
I0410 21:42:39.164354   23206 main.go:141] libmachine: (functional-130509) DBG | domain functional-130509 has defined IP address 192.168.39.252 and MAC address 52:54:00:b6:24:aa in network mk-functional-130509
I0410 21:42:39.164452   23206 main.go:141] libmachine: (functional-130509) Calling .GetSSHPort
I0410 21:42:39.164631   23206 main.go:141] libmachine: (functional-130509) Calling .GetSSHKeyPath
I0410 21:42:39.164781   23206 main.go:141] libmachine: (functional-130509) Calling .GetSSHUsername
I0410 21:42:39.164912   23206 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/functional-130509/id_rsa Username:docker}
I0410 21:42:39.239287   23206 ssh_runner.go:195] Run: sudo crictl images --output json
I0410 21:42:39.284629   23206 main.go:141] libmachine: Making call to close driver server
I0410 21:42:39.284644   23206 main.go:141] libmachine: (functional-130509) Calling .Close
I0410 21:42:39.284928   23206 main.go:141] libmachine: Successfully made call to close driver server
I0410 21:42:39.284958   23206 main.go:141] libmachine: Making call to close connection to plugin binary
I0410 21:42:39.284961   23206 main.go:141] libmachine: (functional-130509) DBG | Closing plugin on server side
I0410 21:42:39.284973   23206 main.go:141] libmachine: Making call to close driver server
I0410 21:42:39.284981   23206 main.go:141] libmachine: (functional-130509) Calling .Close
I0410 21:42:39.285216   23206 main.go:141] libmachine: Successfully made call to close driver server
I0410 21:42:39.285230   23206 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-130509 image ls --format json --alsologtostderr:
[{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"83634073"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":["docker.io/library/nginx@sha256:5be228548c224e43da786fc22a8edf6caec832e8ffd94ab14cb654e6880a1bb8","docker.io/library/nginx@sha256:cd64407576751d9b9ba4924f758d3d39fe76a6e142c32169625b60934c95f057"],"repoTags":["docker.io/library/nginx:latest"],"size":"190874053"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"128508878"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903
c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"123142962"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a","registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"60724018"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06"
,"repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"58c36adda33b2e2a960396d4d5691502e464165c7214dc9c24e3bcfb1a148409","repoDigests":["localhost/minikube-local-cache-test@sha256
:7c0719f1666fff8b7890d8838c9191c23778b6c241a59e69f9b2a92c0d4f48c8"],"repoTags":["localhost/minikube-local-cache-test:functional-130509"],"size":"3330"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","regist
ry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069a
df654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-130509"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-130509 image ls --format json --alsologtostderr:
I0410 21:42:39.121818   23200 out.go:291] Setting OutFile to fd 1 ...
I0410 21:42:39.121938   23200 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0410 21:42:39.121949   23200 out.go:304] Setting ErrFile to fd 2...
I0410 21:42:39.121956   23200 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0410 21:42:39.122138   23200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
I0410 21:42:39.122702   23200 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0410 21:42:39.122813   23200 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0410 21:42:39.123202   23200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0410 21:42:39.123264   23200 main.go:141] libmachine: Launching plugin server for driver kvm2
I0410 21:42:39.137786   23200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
I0410 21:42:39.138266   23200 main.go:141] libmachine: () Calling .GetVersion
I0410 21:42:39.138813   23200 main.go:141] libmachine: Using API Version  1
I0410 21:42:39.138840   23200 main.go:141] libmachine: () Calling .SetConfigRaw
I0410 21:42:39.139226   23200 main.go:141] libmachine: () Calling .GetMachineName
I0410 21:42:39.139439   23200 main.go:141] libmachine: (functional-130509) Calling .GetState
I0410 21:42:39.141420   23200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0410 21:42:39.141459   23200 main.go:141] libmachine: Launching plugin server for driver kvm2
I0410 21:42:39.155755   23200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
I0410 21:42:39.156165   23200 main.go:141] libmachine: () Calling .GetVersion
I0410 21:42:39.156712   23200 main.go:141] libmachine: Using API Version  1
I0410 21:42:39.156736   23200 main.go:141] libmachine: () Calling .SetConfigRaw
I0410 21:42:39.157044   23200 main.go:141] libmachine: () Calling .GetMachineName
I0410 21:42:39.157219   23200 main.go:141] libmachine: (functional-130509) Calling .DriverName
I0410 21:42:39.157420   23200 ssh_runner.go:195] Run: systemctl --version
I0410 21:42:39.157450   23200 main.go:141] libmachine: (functional-130509) Calling .GetSSHHostname
I0410 21:42:39.160367   23200 main.go:141] libmachine: (functional-130509) DBG | domain functional-130509 has defined MAC address 52:54:00:b6:24:aa in network mk-functional-130509
I0410 21:42:39.160806   23200 main.go:141] libmachine: (functional-130509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:24:aa", ip: ""} in network mk-functional-130509: {Iface:virbr1 ExpiryTime:2024-04-10 22:39:00 +0000 UTC Type:0 Mac:52:54:00:b6:24:aa Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:functional-130509 Clientid:01:52:54:00:b6:24:aa}
I0410 21:42:39.160838   23200 main.go:141] libmachine: (functional-130509) DBG | domain functional-130509 has defined IP address 192.168.39.252 and MAC address 52:54:00:b6:24:aa in network mk-functional-130509
I0410 21:42:39.160964   23200 main.go:141] libmachine: (functional-130509) Calling .GetSSHPort
I0410 21:42:39.161179   23200 main.go:141] libmachine: (functional-130509) Calling .GetSSHKeyPath
I0410 21:42:39.161407   23200 main.go:141] libmachine: (functional-130509) Calling .GetSSHUsername
I0410 21:42:39.161571   23200 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/functional-130509/id_rsa Username:docker}
I0410 21:42:39.240931   23200 ssh_runner.go:195] Run: sudo crictl images --output json
I0410 21:42:39.308873   23200 main.go:141] libmachine: Making call to close driver server
I0410 21:42:39.308898   23200 main.go:141] libmachine: (functional-130509) Calling .Close
I0410 21:42:39.309171   23200 main.go:141] libmachine: Successfully made call to close driver server
I0410 21:42:39.309185   23200 main.go:141] libmachine: Making call to close connection to plugin binary
I0410 21:42:39.309188   23200 main.go:141] libmachine: (functional-130509) DBG | Closing plugin on server side
I0410 21:42:39.309197   23200 main.go:141] libmachine: Making call to close driver server
I0410 21:42:39.309206   23200 main.go:141] libmachine: (functional-130509) Calling .Close
I0410 21:42:39.311397   23200 main.go:141] libmachine: Successfully made call to close driver server
I0410 21:42:39.311403   23200 main.go:141] libmachine: (functional-130509) DBG | Closing plugin on server side
I0410 21:42:39.311414   23200 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-130509 image ls --format yaml --alsologtostderr:
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "123142962"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "83634073"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
- registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "60724018"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-130509
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 58c36adda33b2e2a960396d4d5691502e464165c7214dc9c24e3bcfb1a148409
repoDigests:
- localhost/minikube-local-cache-test@sha256:7c0719f1666fff8b7890d8838c9191c23778b6c241a59e69f9b2a92c0d4f48c8
repoTags:
- localhost/minikube-local-cache-test:functional-130509
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests:
- docker.io/library/nginx@sha256:5be228548c224e43da786fc22a8edf6caec832e8ffd94ab14cb654e6880a1bb8
- docker.io/library/nginx@sha256:cd64407576751d9b9ba4924f758d3d39fe76a6e142c32169625b60934c95f057
repoTags:
- docker.io/library/nginx:latest
size: "190874053"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "128508878"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-130509 image ls --format yaml --alsologtostderr:
I0410 21:42:38.888007   23139 out.go:291] Setting OutFile to fd 1 ...
I0410 21:42:38.888136   23139 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0410 21:42:38.888147   23139 out.go:304] Setting ErrFile to fd 2...
I0410 21:42:38.888153   23139 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0410 21:42:38.888350   23139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
I0410 21:42:38.888945   23139 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0410 21:42:38.889061   23139 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0410 21:42:38.889483   23139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0410 21:42:38.889534   23139 main.go:141] libmachine: Launching plugin server for driver kvm2
I0410 21:42:38.904102   23139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
I0410 21:42:38.904615   23139 main.go:141] libmachine: () Calling .GetVersion
I0410 21:42:38.905192   23139 main.go:141] libmachine: Using API Version  1
I0410 21:42:38.905216   23139 main.go:141] libmachine: () Calling .SetConfigRaw
I0410 21:42:38.905574   23139 main.go:141] libmachine: () Calling .GetMachineName
I0410 21:42:38.905757   23139 main.go:141] libmachine: (functional-130509) Calling .GetState
I0410 21:42:38.907639   23139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0410 21:42:38.907674   23139 main.go:141] libmachine: Launching plugin server for driver kvm2
I0410 21:42:38.922676   23139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
I0410 21:42:38.923086   23139 main.go:141] libmachine: () Calling .GetVersion
I0410 21:42:38.923565   23139 main.go:141] libmachine: Using API Version  1
I0410 21:42:38.923589   23139 main.go:141] libmachine: () Calling .SetConfigRaw
I0410 21:42:38.923890   23139 main.go:141] libmachine: () Calling .GetMachineName
I0410 21:42:38.924274   23139 main.go:141] libmachine: (functional-130509) Calling .DriverName
I0410 21:42:38.924525   23139 ssh_runner.go:195] Run: systemctl --version
I0410 21:42:38.924560   23139 main.go:141] libmachine: (functional-130509) Calling .GetSSHHostname
I0410 21:42:38.927562   23139 main.go:141] libmachine: (functional-130509) DBG | domain functional-130509 has defined MAC address 52:54:00:b6:24:aa in network mk-functional-130509
I0410 21:42:38.928084   23139 main.go:141] libmachine: (functional-130509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:24:aa", ip: ""} in network mk-functional-130509: {Iface:virbr1 ExpiryTime:2024-04-10 22:39:00 +0000 UTC Type:0 Mac:52:54:00:b6:24:aa Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:functional-130509 Clientid:01:52:54:00:b6:24:aa}
I0410 21:42:38.928117   23139 main.go:141] libmachine: (functional-130509) DBG | domain functional-130509 has defined IP address 192.168.39.252 and MAC address 52:54:00:b6:24:aa in network mk-functional-130509
I0410 21:42:38.928345   23139 main.go:141] libmachine: (functional-130509) Calling .GetSSHPort
I0410 21:42:38.928558   23139 main.go:141] libmachine: (functional-130509) Calling .GetSSHKeyPath
I0410 21:42:38.928705   23139 main.go:141] libmachine: (functional-130509) Calling .GetSSHUsername
I0410 21:42:38.928871   23139 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/functional-130509/id_rsa Username:docker}
I0410 21:42:39.007401   23139 ssh_runner.go:195] Run: sudo crictl images --output json
I0410 21:42:39.060842   23139 main.go:141] libmachine: Making call to close driver server
I0410 21:42:39.060869   23139 main.go:141] libmachine: (functional-130509) Calling .Close
I0410 21:42:39.061271   23139 main.go:141] libmachine: Successfully made call to close driver server
I0410 21:42:39.061291   23139 main.go:141] libmachine: Making call to close connection to plugin binary
I0410 21:42:39.061313   23139 main.go:141] libmachine: Making call to close driver server
I0410 21:42:39.061291   23139 main.go:141] libmachine: (functional-130509) DBG | Closing plugin on server side
I0410 21:42:39.061321   23139 main.go:141] libmachine: (functional-130509) Calling .Close
I0410 21:42:39.061681   23139 main.go:141] libmachine: (functional-130509) DBG | Closing plugin on server side
I0410 21:42:39.061688   23139 main.go:141] libmachine: Successfully made call to close driver server
I0410 21:42:39.061711   23139 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
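
Note: the four ImageList subtests above differ only in the output encoding of one command; a sketch of the variants against the same profile:

	out/minikube-linux-amd64 -p functional-130509 image ls --format short
	out/minikube-linux-amd64 -p functional-130509 image ls --format table
	out/minikube-linux-amd64 -p functional-130509 image ls --format json
	out/minikube-linux-amd64 -p functional-130509 image ls --format yaml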

TestFunctional/parallel/ImageCommands/ImageBuild (3.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-130509 ssh pgrep buildkitd: exit status 1 (200.579313ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image build -t localhost/my-image:functional-130509 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 image build -t localhost/my-image:functional-130509 testdata/build --alsologtostderr: (3.148743328s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-130509 image build -t localhost/my-image:functional-130509 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0137a599e50
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-130509
--> 789df9e15d0
Successfully tagged localhost/my-image:functional-130509
789df9e15d069df663a791eb5863fd8b3a5e2b6cb370b445ab32f83a0a192032
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-130509 image build -t localhost/my-image:functional-130509 testdata/build --alsologtostderr:
I0410 21:42:39.543546   23277 out.go:291] Setting OutFile to fd 1 ...
I0410 21:42:39.543689   23277 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0410 21:42:39.543699   23277 out.go:304] Setting ErrFile to fd 2...
I0410 21:42:39.543704   23277 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0410 21:42:39.543882   23277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
I0410 21:42:39.544663   23277 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0410 21:42:39.545232   23277 config.go:182] Loaded profile config "functional-130509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0410 21:42:39.545679   23277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0410 21:42:39.545731   23277 main.go:141] libmachine: Launching plugin server for driver kvm2
I0410 21:42:39.560102   23277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
I0410 21:42:39.560539   23277 main.go:141] libmachine: () Calling .GetVersion
I0410 21:42:39.561065   23277 main.go:141] libmachine: Using API Version  1
I0410 21:42:39.561086   23277 main.go:141] libmachine: () Calling .SetConfigRaw
I0410 21:42:39.561483   23277 main.go:141] libmachine: () Calling .GetMachineName
I0410 21:42:39.561720   23277 main.go:141] libmachine: (functional-130509) Calling .GetState
I0410 21:42:39.563851   23277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0410 21:42:39.563909   23277 main.go:141] libmachine: Launching plugin server for driver kvm2
I0410 21:42:39.579685   23277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
I0410 21:42:39.580180   23277 main.go:141] libmachine: () Calling .GetVersion
I0410 21:42:39.580677   23277 main.go:141] libmachine: Using API Version  1
I0410 21:42:39.580699   23277 main.go:141] libmachine: () Calling .SetConfigRaw
I0410 21:42:39.581061   23277 main.go:141] libmachine: () Calling .GetMachineName
I0410 21:42:39.581251   23277 main.go:141] libmachine: (functional-130509) Calling .DriverName
I0410 21:42:39.581433   23277 ssh_runner.go:195] Run: systemctl --version
I0410 21:42:39.581460   23277 main.go:141] libmachine: (functional-130509) Calling .GetSSHHostname
I0410 21:42:39.584071   23277 main.go:141] libmachine: (functional-130509) DBG | domain functional-130509 has defined MAC address 52:54:00:b6:24:aa in network mk-functional-130509
I0410 21:42:39.584452   23277 main.go:141] libmachine: (functional-130509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:24:aa", ip: ""} in network mk-functional-130509: {Iface:virbr1 ExpiryTime:2024-04-10 22:39:00 +0000 UTC Type:0 Mac:52:54:00:b6:24:aa Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:functional-130509 Clientid:01:52:54:00:b6:24:aa}
I0410 21:42:39.584479   23277 main.go:141] libmachine: (functional-130509) DBG | domain functional-130509 has defined IP address 192.168.39.252 and MAC address 52:54:00:b6:24:aa in network mk-functional-130509
I0410 21:42:39.584579   23277 main.go:141] libmachine: (functional-130509) Calling .GetSSHPort
I0410 21:42:39.584747   23277 main.go:141] libmachine: (functional-130509) Calling .GetSSHKeyPath
I0410 21:42:39.584878   23277 main.go:141] libmachine: (functional-130509) Calling .GetSSHUsername
I0410 21:42:39.585021   23277 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/functional-130509/id_rsa Username:docker}
I0410 21:42:39.663261   23277 build_images.go:161] Building image from path: /tmp/build.4071705132.tar
I0410 21:42:39.663314   23277 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0410 21:42:39.674723   23277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4071705132.tar
I0410 21:42:39.679337   23277 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4071705132.tar: stat -c "%s %y" /var/lib/minikube/build/build.4071705132.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4071705132.tar': No such file or directory
I0410 21:42:39.679385   23277 ssh_runner.go:362] scp /tmp/build.4071705132.tar --> /var/lib/minikube/build/build.4071705132.tar (3072 bytes)
I0410 21:42:39.709156   23277 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4071705132
I0410 21:42:39.720350   23277 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4071705132 -xf /var/lib/minikube/build/build.4071705132.tar
I0410 21:42:39.731188   23277 crio.go:315] Building image: /var/lib/minikube/build/build.4071705132
I0410 21:42:39.731255   23277 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-130509 /var/lib/minikube/build/build.4071705132 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0410 21:42:42.605851   23277 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-130509 /var/lib/minikube/build/build.4071705132 --cgroup-manager=cgroupfs: (2.874576498s)
I0410 21:42:42.605905   23277 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4071705132
I0410 21:42:42.618132   23277 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4071705132.tar
I0410 21:42:42.632122   23277 build_images.go:217] Built localhost/my-image:functional-130509 from /tmp/build.4071705132.tar
I0410 21:42:42.632158   23277 build_images.go:133] succeeded building to: functional-130509
I0410 21:42:42.632164   23277 build_images.go:134] failed building to: 
I0410 21:42:42.632189   23277 main.go:141] libmachine: Making call to close driver server
I0410 21:42:42.632197   23277 main.go:141] libmachine: (functional-130509) Calling .Close
I0410 21:42:42.632452   23277 main.go:141] libmachine: Successfully made call to close driver server
I0410 21:42:42.632471   23277 main.go:141] libmachine: Making call to close connection to plugin binary
I0410 21:42:42.632495   23277 main.go:141] libmachine: Making call to close driver server
I0410 21:42:42.632503   23277 main.go:141] libmachine: (functional-130509) Calling .Close
I0410 21:42:42.632747   23277 main.go:141] libmachine: Successfully made call to close driver server
I0410 21:42:42.632758   23277 main.go:141] libmachine: (functional-130509) DBG | Closing plugin on server side
I0410 21:42:42.632761   23277 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.61s)
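
Note: judging from the STEP lines in the stdout above, the testdata/build context is a three-step Containerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), and the build runs inside the VM via podman. A sketch of the same flow:

	# build an image from a local context directory inside the minikube VM
	out/minikube-linux-amd64 -p functional-130509 image build -t localhost/my-image:functional-130509 testdata/build
	# confirm the new tag is visible to the runtime
	out/minikube-linux-amd64 -p functional-130509 image ls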

TestFunctional/parallel/ImageCommands/Setup (2.1s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.078136032s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-130509
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.10s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.97s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.97s)
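
Note: the two Version subtests map to (sketch):

	# one-line version string
	out/minikube-linux-amd64 -p functional-130509 version --short
	# per-component versions as JSON
	out/minikube-linux-amd64 -p functional-130509 version -o=json --components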

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image load --daemon gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 image load --daemon gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr: (4.607937724s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.87s)
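
Note: the daemon-load path exercised here (and again by ImageReloadDaemon below) is pull/tag in the host docker daemon, then copy into the cluster's runtime; a sketch using the same image as the Setup step:

	docker pull gcr.io/google-containers/addon-resizer:1.8.8
	docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-130509
	out/minikube-linux-amd64 -p functional-130509 image load --daemon gcr.io/google-containers/addon-resizer:functional-130509
	out/minikube-linux-amd64 -p functional-130509 image ls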

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image load --daemon gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 image load --daemon gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr: (2.972968077s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0410 21:42:20.091995   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.939689216s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-130509
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image load --daemon gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 image load --daemon gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr: (5.294490103s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image ls
2024/04/10 21:42:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image save gcr.io/google-containers/addon-resizer:functional-130509 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 image save gcr.io/google-containers/addon-resizer:functional-130509 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.477280353s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image rm gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.851620173s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-130509
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-130509 image save --daemon gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-130509 image save --daemon gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr: (1.618087049s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-130509
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.65s)
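The ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon entries above round-trip the same image out of and back into the cluster. A sketch of that round trip assembled from the commands in this run (the tarball path below is a placeholder; the run itself used a Jenkins workspace path):

  # export the image from the cluster to a tarball, remove it, then re-import it
  out/minikube-linux-amd64 -p functional-130509 image save gcr.io/google-containers/addon-resizer:functional-130509 /tmp/addon-resizer-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-130509 image rm gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr
  out/minikube-linux-amd64 -p functional-130509 image load /tmp/addon-resizer-save.tar --alsologtostderr
  # or copy it straight back into the host docker daemon and confirm it arrived
  out/minikube-linux-amd64 -p functional-130509 image save --daemon gcr.io/google-containers/addon-resizer:functional-130509 --alsologtostderr
  docker image inspect gcr.io/google-containers/addon-resizer:functional-130509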

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-130509
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-130509
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-130509
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (265.48s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-150873 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0410 21:43:21.533520   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:44:43.453822   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:46:54.112224   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:54.117588   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:54.127907   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:54.148224   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:54.188845   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:54.269237   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:54.429621   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:54.750793   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:55.391812   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:56.671964   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:59.232895   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 21:46:59.610515   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:47:04.353965   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-150873 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m24.769126799s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (265.48s)
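StartCluster brings up a multi-control-plane (HA) cluster in a single invocation and then checks node health. The command pair as run above (the profile name ha-150873 is specific to this run):

  out/minikube-linux-amd64 start -p ha-150873 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr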

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.88s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- rollout status deployment/busybox
E0410 21:47:14.594497   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-150873 -- rollout status deployment/busybox: (4.424612358s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-c58s7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-npbvn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-v9dkg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-c58s7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-npbvn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-v9dkg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-c58s7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-npbvn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-v9dkg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.88s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.39s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-c58s7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-c58s7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-npbvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-npbvn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-v9dkg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec busybox-7fdf7869d9-v9dkg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)
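DeployApp and PingHostFromPods verify that pods scheduled across the HA cluster can resolve cluster DNS and reach the host. Roughly equivalent manual checks, using the commands from this run (replace <busybox-pod> with a pod name reported by get pods; 192.168.39.1 is the KVM network gateway observed in this run):

  out/minikube-linux-amd64 kubectl -p ha-150873 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-amd64 kubectl -p ha-150873 -- rollout status deployment/busybox
  out/minikube-linux-amd64 kubectl -p ha-150873 -- get pods -o jsonpath='{.items[*].metadata.name}'
  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local
  out/minikube-linux-amd64 kubectl -p ha-150873 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"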

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (46.97s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-150873 -v=7 --alsologtostderr
E0410 21:47:27.294283   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 21:47:35.075113   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-150873 -v=7 --alsologtostderr: (46.084735964s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.97s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-150873 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.59s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.59s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.83s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp testdata/cp-test.txt ha-150873:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile947152864/001/cp-test_ha-150873.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873:/home/docker/cp-test.txt ha-150873-m02:/home/docker/cp-test_ha-150873_ha-150873-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m02 "sudo cat /home/docker/cp-test_ha-150873_ha-150873-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873:/home/docker/cp-test.txt ha-150873-m03:/home/docker/cp-test_ha-150873_ha-150873-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m03 "sudo cat /home/docker/cp-test_ha-150873_ha-150873-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873:/home/docker/cp-test.txt ha-150873-m04:/home/docker/cp-test_ha-150873_ha-150873-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m04 "sudo cat /home/docker/cp-test_ha-150873_ha-150873-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp testdata/cp-test.txt ha-150873-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile947152864/001/cp-test_ha-150873-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m02:/home/docker/cp-test.txt ha-150873:/home/docker/cp-test_ha-150873-m02_ha-150873.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873 "sudo cat /home/docker/cp-test_ha-150873-m02_ha-150873.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m02:/home/docker/cp-test.txt ha-150873-m03:/home/docker/cp-test_ha-150873-m02_ha-150873-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m03 "sudo cat /home/docker/cp-test_ha-150873-m02_ha-150873-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m02:/home/docker/cp-test.txt ha-150873-m04:/home/docker/cp-test_ha-150873-m02_ha-150873-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m02 "sudo cat /home/docker/cp-test.txt"
E0410 21:48:16.036173   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m04 "sudo cat /home/docker/cp-test_ha-150873-m02_ha-150873-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp testdata/cp-test.txt ha-150873-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile947152864/001/cp-test_ha-150873-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m03:/home/docker/cp-test.txt ha-150873:/home/docker/cp-test_ha-150873-m03_ha-150873.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873 "sudo cat /home/docker/cp-test_ha-150873-m03_ha-150873.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m03:/home/docker/cp-test.txt ha-150873-m02:/home/docker/cp-test_ha-150873-m03_ha-150873-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m02 "sudo cat /home/docker/cp-test_ha-150873-m03_ha-150873-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m03:/home/docker/cp-test.txt ha-150873-m04:/home/docker/cp-test_ha-150873-m03_ha-150873-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m04 "sudo cat /home/docker/cp-test_ha-150873-m03_ha-150873-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp testdata/cp-test.txt ha-150873-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile947152864/001/cp-test_ha-150873-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt ha-150873:/home/docker/cp-test_ha-150873-m04_ha-150873.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873 "sudo cat /home/docker/cp-test_ha-150873-m04_ha-150873.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt ha-150873-m02:/home/docker/cp-test_ha-150873-m04_ha-150873-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m02 "sudo cat /home/docker/cp-test_ha-150873-m04_ha-150873-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 cp ha-150873-m04:/home/docker/cp-test.txt ha-150873-m03:/home/docker/cp-test_ha-150873-m04_ha-150873-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m03 "sudo cat /home/docker/cp-test_ha-150873-m04_ha-150873-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.83s)
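CopyFile pushes a test file to every node and reads it back over SSH from every other node. The basic primitive pair, exactly as exercised above:

  # copy a local file onto a specific node, then verify it from that node
  out/minikube-linux-amd64 -p ha-150873 cp testdata/cp-test.txt ha-150873-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-150873 ssh -n ha-150873-m02 "sudo cat /home/docker/cp-test.txt"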

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (3.97s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-150873 node stop m02 -v=7 --alsologtostderr: (3.316276426s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr: exit status 7 (652.729041ms)

                                                
                                                
-- stdout --
	ha-150873
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150873-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-150873-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150873-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 21:48:26.351250   27770 out.go:291] Setting OutFile to fd 1 ...
	I0410 21:48:26.351371   27770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:48:26.351383   27770 out.go:304] Setting ErrFile to fd 2...
	I0410 21:48:26.351389   27770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 21:48:26.351602   27770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 21:48:26.351802   27770 out.go:298] Setting JSON to false
	I0410 21:48:26.351838   27770 mustload.go:65] Loading cluster: ha-150873
	I0410 21:48:26.351882   27770 notify.go:220] Checking for updates...
	I0410 21:48:26.352273   27770 config.go:182] Loaded profile config "ha-150873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 21:48:26.352288   27770 status.go:255] checking status of ha-150873 ...
	I0410 21:48:26.352807   27770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:48:26.352861   27770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:48:26.370997   27770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32769
	I0410 21:48:26.371457   27770 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:48:26.372147   27770 main.go:141] libmachine: Using API Version  1
	I0410 21:48:26.372208   27770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:48:26.372579   27770 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:48:26.372775   27770 main.go:141] libmachine: (ha-150873) Calling .GetState
	I0410 21:48:26.374377   27770 status.go:330] ha-150873 host status = "Running" (err=<nil>)
	I0410 21:48:26.374392   27770 host.go:66] Checking if "ha-150873" exists ...
	I0410 21:48:26.374645   27770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:48:26.374675   27770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:48:26.389628   27770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
	I0410 21:48:26.390052   27770 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:48:26.390517   27770 main.go:141] libmachine: Using API Version  1
	I0410 21:48:26.390546   27770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:48:26.390893   27770 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:48:26.391104   27770 main.go:141] libmachine: (ha-150873) Calling .GetIP
	I0410 21:48:26.394267   27770 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:48:26.394813   27770 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:48:26.394842   27770 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:48:26.394978   27770 host.go:66] Checking if "ha-150873" exists ...
	I0410 21:48:26.395266   27770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:48:26.395301   27770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:48:26.410542   27770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0410 21:48:26.411059   27770 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:48:26.411563   27770 main.go:141] libmachine: Using API Version  1
	I0410 21:48:26.411601   27770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:48:26.411942   27770 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:48:26.412152   27770 main.go:141] libmachine: (ha-150873) Calling .DriverName
	I0410 21:48:26.412341   27770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0410 21:48:26.412374   27770 main.go:141] libmachine: (ha-150873) Calling .GetSSHHostname
	I0410 21:48:26.415450   27770 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:48:26.415881   27770 main.go:141] libmachine: (ha-150873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:93:6b", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:43:02 +0000 UTC Type:0 Mac:52:54:00:50:93:6b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-150873 Clientid:01:52:54:00:50:93:6b}
	I0410 21:48:26.415909   27770 main.go:141] libmachine: (ha-150873) DBG | domain ha-150873 has defined IP address 192.168.39.12 and MAC address 52:54:00:50:93:6b in network mk-ha-150873
	I0410 21:48:26.416053   27770 main.go:141] libmachine: (ha-150873) Calling .GetSSHPort
	I0410 21:48:26.416222   27770 main.go:141] libmachine: (ha-150873) Calling .GetSSHKeyPath
	I0410 21:48:26.416389   27770 main.go:141] libmachine: (ha-150873) Calling .GetSSHUsername
	I0410 21:48:26.416583   27770 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873/id_rsa Username:docker}
	I0410 21:48:26.497380   27770 ssh_runner.go:195] Run: systemctl --version
	I0410 21:48:26.505272   27770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 21:48:26.521711   27770 kubeconfig.go:125] found "ha-150873" server: "https://192.168.39.254:8443"
	I0410 21:48:26.521738   27770 api_server.go:166] Checking apiserver status ...
	I0410 21:48:26.521769   27770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 21:48:26.538713   27770 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1233/cgroup
	W0410 21:48:26.549278   27770 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1233/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0410 21:48:26.549331   27770 ssh_runner.go:195] Run: ls
	I0410 21:48:26.554813   27770 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0410 21:48:26.558951   27770 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0410 21:48:26.558970   27770 status.go:422] ha-150873 apiserver status = Running (err=<nil>)
	I0410 21:48:26.558979   27770 status.go:257] ha-150873 status: &{Name:ha-150873 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0410 21:48:26.558997   27770 status.go:255] checking status of ha-150873-m02 ...
	I0410 21:48:26.559373   27770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:48:26.559418   27770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:48:26.574440   27770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
	I0410 21:48:26.574864   27770 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:48:26.575283   27770 main.go:141] libmachine: Using API Version  1
	I0410 21:48:26.575301   27770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:48:26.575630   27770 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:48:26.575890   27770 main.go:141] libmachine: (ha-150873-m02) Calling .GetState
	I0410 21:48:26.577763   27770 status.go:330] ha-150873-m02 host status = "Stopped" (err=<nil>)
	I0410 21:48:26.577776   27770 status.go:343] host is not running, skipping remaining checks
	I0410 21:48:26.577784   27770 status.go:257] ha-150873-m02 status: &{Name:ha-150873-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0410 21:48:26.577817   27770 status.go:255] checking status of ha-150873-m03 ...
	I0410 21:48:26.578081   27770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:48:26.578124   27770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:48:26.592538   27770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I0410 21:48:26.592955   27770 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:48:26.593391   27770 main.go:141] libmachine: Using API Version  1
	I0410 21:48:26.593417   27770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:48:26.593792   27770 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:48:26.594011   27770 main.go:141] libmachine: (ha-150873-m03) Calling .GetState
	I0410 21:48:26.595506   27770 status.go:330] ha-150873-m03 host status = "Running" (err=<nil>)
	I0410 21:48:26.595524   27770 host.go:66] Checking if "ha-150873-m03" exists ...
	I0410 21:48:26.595891   27770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:48:26.595955   27770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:48:26.611878   27770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39495
	I0410 21:48:26.612429   27770 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:48:26.612959   27770 main.go:141] libmachine: Using API Version  1
	I0410 21:48:26.612986   27770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:48:26.613362   27770 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:48:26.613558   27770 main.go:141] libmachine: (ha-150873-m03) Calling .GetIP
	I0410 21:48:26.616731   27770 main.go:141] libmachine: (ha-150873-m03) DBG | domain ha-150873-m03 has defined MAC address 52:54:00:07:78:28 in network mk-ha-150873
	I0410 21:48:26.617224   27770 main.go:141] libmachine: (ha-150873-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:78:28", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:46:11 +0000 UTC Type:0 Mac:52:54:00:07:78:28 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:ha-150873-m03 Clientid:01:52:54:00:07:78:28}
	I0410 21:48:26.617269   27770 main.go:141] libmachine: (ha-150873-m03) DBG | domain ha-150873-m03 has defined IP address 192.168.39.143 and MAC address 52:54:00:07:78:28 in network mk-ha-150873
	I0410 21:48:26.617457   27770 host.go:66] Checking if "ha-150873-m03" exists ...
	I0410 21:48:26.617919   27770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:48:26.617965   27770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:48:26.633561   27770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0410 21:48:26.633949   27770 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:48:26.634382   27770 main.go:141] libmachine: Using API Version  1
	I0410 21:48:26.634406   27770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:48:26.634775   27770 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:48:26.634991   27770 main.go:141] libmachine: (ha-150873-m03) Calling .DriverName
	I0410 21:48:26.635176   27770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0410 21:48:26.635209   27770 main.go:141] libmachine: (ha-150873-m03) Calling .GetSSHHostname
	I0410 21:48:26.637952   27770 main.go:141] libmachine: (ha-150873-m03) DBG | domain ha-150873-m03 has defined MAC address 52:54:00:07:78:28 in network mk-ha-150873
	I0410 21:48:26.638384   27770 main.go:141] libmachine: (ha-150873-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:78:28", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:46:11 +0000 UTC Type:0 Mac:52:54:00:07:78:28 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:ha-150873-m03 Clientid:01:52:54:00:07:78:28}
	I0410 21:48:26.638425   27770 main.go:141] libmachine: (ha-150873-m03) DBG | domain ha-150873-m03 has defined IP address 192.168.39.143 and MAC address 52:54:00:07:78:28 in network mk-ha-150873
	I0410 21:48:26.638573   27770 main.go:141] libmachine: (ha-150873-m03) Calling .GetSSHPort
	I0410 21:48:26.638748   27770 main.go:141] libmachine: (ha-150873-m03) Calling .GetSSHKeyPath
	I0410 21:48:26.638977   27770 main.go:141] libmachine: (ha-150873-m03) Calling .GetSSHUsername
	I0410 21:48:26.639130   27770 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873-m03/id_rsa Username:docker}
	I0410 21:48:26.724536   27770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 21:48:26.740679   27770 kubeconfig.go:125] found "ha-150873" server: "https://192.168.39.254:8443"
	I0410 21:48:26.740711   27770 api_server.go:166] Checking apiserver status ...
	I0410 21:48:26.740753   27770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 21:48:26.756784   27770 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup
	W0410 21:48:26.766744   27770 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1534/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0410 21:48:26.766803   27770 ssh_runner.go:195] Run: ls
	I0410 21:48:26.772772   27770 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0410 21:48:26.781412   27770 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0410 21:48:26.781440   27770 status.go:422] ha-150873-m03 apiserver status = Running (err=<nil>)
	I0410 21:48:26.781452   27770 status.go:257] ha-150873-m03 status: &{Name:ha-150873-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0410 21:48:26.781468   27770 status.go:255] checking status of ha-150873-m04 ...
	I0410 21:48:26.781807   27770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:48:26.781852   27770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:48:26.796972   27770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0410 21:48:26.797366   27770 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:48:26.797825   27770 main.go:141] libmachine: Using API Version  1
	I0410 21:48:26.797846   27770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:48:26.798157   27770 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:48:26.798321   27770 main.go:141] libmachine: (ha-150873-m04) Calling .GetState
	I0410 21:48:26.799871   27770 status.go:330] ha-150873-m04 host status = "Running" (err=<nil>)
	I0410 21:48:26.799887   27770 host.go:66] Checking if "ha-150873-m04" exists ...
	I0410 21:48:26.800239   27770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:48:26.800286   27770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:48:26.816298   27770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37885
	I0410 21:48:26.816685   27770 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:48:26.817136   27770 main.go:141] libmachine: Using API Version  1
	I0410 21:48:26.817164   27770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:48:26.817462   27770 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:48:26.817640   27770 main.go:141] libmachine: (ha-150873-m04) Calling .GetIP
	I0410 21:48:26.820390   27770 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:48:26.820804   27770 main.go:141] libmachine: (ha-150873-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:5f:bd", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:47:37 +0000 UTC Type:0 Mac:52:54:00:56:5f:bd Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-150873-m04 Clientid:01:52:54:00:56:5f:bd}
	I0410 21:48:26.820831   27770 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined IP address 192.168.39.144 and MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:48:26.820989   27770 host.go:66] Checking if "ha-150873-m04" exists ...
	I0410 21:48:26.821269   27770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 21:48:26.821305   27770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 21:48:26.835710   27770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I0410 21:48:26.836187   27770 main.go:141] libmachine: () Calling .GetVersion
	I0410 21:48:26.836687   27770 main.go:141] libmachine: Using API Version  1
	I0410 21:48:26.836708   27770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 21:48:26.836980   27770 main.go:141] libmachine: () Calling .GetMachineName
	I0410 21:48:26.837162   27770 main.go:141] libmachine: (ha-150873-m04) Calling .DriverName
	I0410 21:48:26.837331   27770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0410 21:48:26.837349   27770 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHHostname
	I0410 21:48:26.840356   27770 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:48:26.840818   27770 main.go:141] libmachine: (ha-150873-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:5f:bd", ip: ""} in network mk-ha-150873: {Iface:virbr1 ExpiryTime:2024-04-10 22:47:37 +0000 UTC Type:0 Mac:52:54:00:56:5f:bd Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-150873-m04 Clientid:01:52:54:00:56:5f:bd}
	I0410 21:48:26.840860   27770 main.go:141] libmachine: (ha-150873-m04) DBG | domain ha-150873-m04 has defined IP address 192.168.39.144 and MAC address 52:54:00:56:5f:bd in network mk-ha-150873
	I0410 21:48:26.841072   27770 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHPort
	I0410 21:48:26.841259   27770 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHKeyPath
	I0410 21:48:26.841520   27770 main.go:141] libmachine: (ha-150873-m04) Calling .GetSSHUsername
	I0410 21:48:26.841706   27770 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/ha-150873-m04/id_rsa Username:docker}
	I0410 21:48:26.930983   27770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 21:48:26.946299   27770 status.go:257] ha-150873-m04 status: &{Name:ha-150873-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (3.97s)
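StopSecondaryNode stops one control-plane node and confirms that status reports it as Stopped while the rest of the cluster keeps running; note that status exits non-zero (7 in this run) while a node is down, as shown above:

  out/minikube-linux-amd64 -p ha-150873 node stop m02 -v=7 --alsologtostderr
  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr   # exit status 7 while m02 is stopped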

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (45.94s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-150873 node start m02 -v=7 --alsologtostderr: (45.020466091s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.94s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.56s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.56s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.48s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-150873 node delete m03 -v=7 --alsologtostderr: (16.722171863s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (376.35s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-150873 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0410 21:58:22.655853   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 22:01:54.112217   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:01:59.610095   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 22:03:17.159044   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-150873 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m15.540832669s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (376.35s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.71s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-150873 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-150873 --control-plane -v=7 --alsologtostderr: (1m14.850382544s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.71s)
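AddWorkerNode and AddSecondaryNode (above) grow the cluster with a worker and an additional control-plane node respectively; as the run shows, the only difference is the --control-plane flag:

  out/minikube-linux-amd64 node add -p ha-150873 -v=7 --alsologtostderr                   # worker node
  out/minikube-linux-amd64 node add -p ha-150873 --control-plane -v=7 --alsologtostderr   # control-plane node
  out/minikube-linux-amd64 -p ha-150873 status -v=7 --alsologtostderr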

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                    
TestJSONOutput/start/Command (53.89s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-445974 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-445974 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.893187184s)
--- PASS: TestJSONOutput/start/Command (53.89s)
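TestJSONOutput/start/Command runs start with --output=json, which emits one JSON event per line. A sketch of consuming that stream, assuming jq is installed (jq is not part of the test itself):

  out/minikube-linux-amd64 start -p json-output-445974 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio \
    | jq -r '.data.message // empty'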

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.83s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-445974 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.83s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-445974 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (9.55s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-445974 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-445974 --output=json --user=testUser: (9.546265226s)
--- PASS: TestJSONOutput/stop/Command (9.55s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-290917 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-290917 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.478722ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e8287a6f-bec7-4148-8f7a-c3793a1588d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-290917] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0df9f17c-cb36-473b-b8f2-ebcb5490233f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18610"}}
	{"specversion":"1.0","id":"f08f8ada-497d-43c7-a346-ef5c7667cfad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"18e34b04-2fcd-4680-b5af-923dbfe8daef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig"}}
	{"specversion":"1.0","id":"03f5887a-8fff-454f-9212-55bde141f417","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube"}}
	{"specversion":"1.0","id":"7afece48-59cf-4e1f-8b78-335dd3061d68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a38d1499-e7fb-4bb4-bb43-baf28c1be649","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aca0b92d-9745-47e2-8b2a-3bb2126fedca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-290917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-290917
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (94.65s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-766316 --driver=kvm2  --container-runtime=crio
E0410 22:06:54.114674   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:06:59.611202   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-766316 --driver=kvm2  --container-runtime=crio: (45.796259115s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-768258 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-768258 --driver=kvm2  --container-runtime=crio: (45.968549161s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-766316
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-768258
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-768258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-768258
helpers_test.go:175: Cleaning up "first-766316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-766316
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-766316: (1.002129921s)
--- PASS: TestMinikubeProfile (94.65s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-169148 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-169148 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.383795571s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.38s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-169148 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-169148 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-181195 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-181195 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.408332041s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.41s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181195 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181195 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.9s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-169148 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181195 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181195 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-181195
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-181195: (1.339281237s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.6s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-181195
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-181195: (21.601579202s)
--- PASS: TestMountStart/serial/RestartStopped (22.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181195 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181195 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (103.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-824789 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-824789 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m43.30113397s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.71s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-824789 -- rollout status deployment/busybox: (4.014854429s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- exec busybox-7fdf7869d9-6cmbq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- exec busybox-7fdf7869d9-k2ds9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- exec busybox-7fdf7869d9-6cmbq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- exec busybox-7fdf7869d9-k2ds9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- exec busybox-7fdf7869d9-6cmbq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- exec busybox-7fdf7869d9-k2ds9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.65s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- exec busybox-7fdf7869d9-6cmbq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- exec busybox-7fdf7869d9-6cmbq -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- exec busybox-7fdf7869d9-k2ds9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824789 -- exec busybox-7fdf7869d9-k2ds9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (41.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-824789 -v 3 --alsologtostderr
E0410 22:11:54.112535   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:11:59.609954   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-824789 -v 3 --alsologtostderr: (40.57505711s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.17s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-824789 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp testdata/cp-test.txt multinode-824789:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp multinode-824789:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2014130066/001/cp-test_multinode-824789.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp multinode-824789:/home/docker/cp-test.txt multinode-824789-m02:/home/docker/cp-test_multinode-824789_multinode-824789-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m02 "sudo cat /home/docker/cp-test_multinode-824789_multinode-824789-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp multinode-824789:/home/docker/cp-test.txt multinode-824789-m03:/home/docker/cp-test_multinode-824789_multinode-824789-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m03 "sudo cat /home/docker/cp-test_multinode-824789_multinode-824789-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp testdata/cp-test.txt multinode-824789-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp multinode-824789-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2014130066/001/cp-test_multinode-824789-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp multinode-824789-m02:/home/docker/cp-test.txt multinode-824789:/home/docker/cp-test_multinode-824789-m02_multinode-824789.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789 "sudo cat /home/docker/cp-test_multinode-824789-m02_multinode-824789.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp multinode-824789-m02:/home/docker/cp-test.txt multinode-824789-m03:/home/docker/cp-test_multinode-824789-m02_multinode-824789-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m03 "sudo cat /home/docker/cp-test_multinode-824789-m02_multinode-824789-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp testdata/cp-test.txt multinode-824789-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp multinode-824789-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2014130066/001/cp-test_multinode-824789-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp multinode-824789-m03:/home/docker/cp-test.txt multinode-824789:/home/docker/cp-test_multinode-824789-m03_multinode-824789.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789 "sudo cat /home/docker/cp-test_multinode-824789-m03_multinode-824789.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 cp multinode-824789-m03:/home/docker/cp-test.txt multinode-824789-m02:/home/docker/cp-test_multinode-824789-m03_multinode-824789-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 ssh -n multinode-824789-m02 "sudo cat /home/docker/cp-test_multinode-824789-m03_multinode-824789-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.46s)

                                                
                                    
TestMultiNode/serial/StopNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-824789 node stop m03: (1.564645493s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-824789 status: exit status 7 (440.398163ms)

                                                
                                                
-- stdout --
	multinode-824789
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-824789-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-824789-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-824789 status --alsologtostderr: exit status 7 (435.915214ms)

                                                
                                                
-- stdout --
	multinode-824789
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-824789-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-824789-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 22:12:20.800041   39465 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:12:20.800164   39465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:12:20.800175   39465 out.go:304] Setting ErrFile to fd 2...
	I0410 22:12:20.800179   39465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:12:20.800427   39465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:12:20.800635   39465 out.go:298] Setting JSON to false
	I0410 22:12:20.800666   39465 mustload.go:65] Loading cluster: multinode-824789
	I0410 22:12:20.800703   39465 notify.go:220] Checking for updates...
	I0410 22:12:20.801070   39465 config.go:182] Loaded profile config "multinode-824789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:12:20.801085   39465 status.go:255] checking status of multinode-824789 ...
	I0410 22:12:20.801485   39465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:12:20.801548   39465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:12:20.817693   39465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44897
	I0410 22:12:20.818093   39465 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:12:20.818625   39465 main.go:141] libmachine: Using API Version  1
	I0410 22:12:20.818646   39465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:12:20.819018   39465 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:12:20.819246   39465 main.go:141] libmachine: (multinode-824789) Calling .GetState
	I0410 22:12:20.820644   39465 status.go:330] multinode-824789 host status = "Running" (err=<nil>)
	I0410 22:12:20.820661   39465 host.go:66] Checking if "multinode-824789" exists ...
	I0410 22:12:20.820981   39465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:12:20.821022   39465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:12:20.836123   39465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37111
	I0410 22:12:20.836575   39465 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:12:20.837056   39465 main.go:141] libmachine: Using API Version  1
	I0410 22:12:20.837079   39465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:12:20.837403   39465 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:12:20.837651   39465 main.go:141] libmachine: (multinode-824789) Calling .GetIP
	I0410 22:12:20.840999   39465 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:12:20.841427   39465 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:12:20.841458   39465 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:12:20.841610   39465 host.go:66] Checking if "multinode-824789" exists ...
	I0410 22:12:20.841887   39465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:12:20.841927   39465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:12:20.856838   39465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0410 22:12:20.857330   39465 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:12:20.857835   39465 main.go:141] libmachine: Using API Version  1
	I0410 22:12:20.857858   39465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:12:20.858158   39465 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:12:20.858373   39465 main.go:141] libmachine: (multinode-824789) Calling .DriverName
	I0410 22:12:20.858562   39465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0410 22:12:20.858604   39465 main.go:141] libmachine: (multinode-824789) Calling .GetSSHHostname
	I0410 22:12:20.861757   39465 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:12:20.862196   39465 main.go:141] libmachine: (multinode-824789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:10:8f", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:09:54 +0000 UTC Type:0 Mac:52:54:00:af:10:8f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-824789 Clientid:01:52:54:00:af:10:8f}
	I0410 22:12:20.862230   39465 main.go:141] libmachine: (multinode-824789) DBG | domain multinode-824789 has defined IP address 192.168.39.94 and MAC address 52:54:00:af:10:8f in network mk-multinode-824789
	I0410 22:12:20.862372   39465 main.go:141] libmachine: (multinode-824789) Calling .GetSSHPort
	I0410 22:12:20.862573   39465 main.go:141] libmachine: (multinode-824789) Calling .GetSSHKeyPath
	I0410 22:12:20.862746   39465 main.go:141] libmachine: (multinode-824789) Calling .GetSSHUsername
	I0410 22:12:20.862900   39465 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/multinode-824789/id_rsa Username:docker}
	I0410 22:12:20.945578   39465 ssh_runner.go:195] Run: systemctl --version
	I0410 22:12:20.951798   39465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:12:20.969274   39465 kubeconfig.go:125] found "multinode-824789" server: "https://192.168.39.94:8443"
	I0410 22:12:20.969308   39465 api_server.go:166] Checking apiserver status ...
	I0410 22:12:20.969339   39465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0410 22:12:20.986451   39465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1128/cgroup
	W0410 22:12:20.997206   39465 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1128/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0410 22:12:20.997265   39465 ssh_runner.go:195] Run: ls
	I0410 22:12:21.001969   39465 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0410 22:12:21.006445   39465 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I0410 22:12:21.006467   39465 status.go:422] multinode-824789 apiserver status = Running (err=<nil>)
	I0410 22:12:21.006476   39465 status.go:257] multinode-824789 status: &{Name:multinode-824789 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0410 22:12:21.006491   39465 status.go:255] checking status of multinode-824789-m02 ...
	I0410 22:12:21.006811   39465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:12:21.006848   39465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:12:21.022590   39465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0410 22:12:21.023049   39465 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:12:21.023509   39465 main.go:141] libmachine: Using API Version  1
	I0410 22:12:21.023536   39465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:12:21.023858   39465 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:12:21.024048   39465 main.go:141] libmachine: (multinode-824789-m02) Calling .GetState
	I0410 22:12:21.025629   39465 status.go:330] multinode-824789-m02 host status = "Running" (err=<nil>)
	I0410 22:12:21.025644   39465 host.go:66] Checking if "multinode-824789-m02" exists ...
	I0410 22:12:21.025907   39465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:12:21.025938   39465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:12:21.040625   39465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0410 22:12:21.041065   39465 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:12:21.041514   39465 main.go:141] libmachine: Using API Version  1
	I0410 22:12:21.041535   39465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:12:21.041858   39465 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:12:21.042051   39465 main.go:141] libmachine: (multinode-824789-m02) Calling .GetIP
	I0410 22:12:21.044845   39465 main.go:141] libmachine: (multinode-824789-m02) DBG | domain multinode-824789-m02 has defined MAC address 52:54:00:ed:da:b2 in network mk-multinode-824789
	I0410 22:12:21.045225   39465 main.go:141] libmachine: (multinode-824789-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:da:b2", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:10:57 +0000 UTC Type:0 Mac:52:54:00:ed:da:b2 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-824789-m02 Clientid:01:52:54:00:ed:da:b2}
	I0410 22:12:21.045264   39465 main.go:141] libmachine: (multinode-824789-m02) DBG | domain multinode-824789-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:ed:da:b2 in network mk-multinode-824789
	I0410 22:12:21.045400   39465 host.go:66] Checking if "multinode-824789-m02" exists ...
	I0410 22:12:21.045695   39465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:12:21.045730   39465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:12:21.060390   39465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I0410 22:12:21.060783   39465 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:12:21.061210   39465 main.go:141] libmachine: Using API Version  1
	I0410 22:12:21.061231   39465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:12:21.061534   39465 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:12:21.061695   39465 main.go:141] libmachine: (multinode-824789-m02) Calling .DriverName
	I0410 22:12:21.061952   39465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0410 22:12:21.061982   39465 main.go:141] libmachine: (multinode-824789-m02) Calling .GetSSHHostname
	I0410 22:12:21.064519   39465 main.go:141] libmachine: (multinode-824789-m02) DBG | domain multinode-824789-m02 has defined MAC address 52:54:00:ed:da:b2 in network mk-multinode-824789
	I0410 22:12:21.064883   39465 main.go:141] libmachine: (multinode-824789-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:da:b2", ip: ""} in network mk-multinode-824789: {Iface:virbr1 ExpiryTime:2024-04-10 23:10:57 +0000 UTC Type:0 Mac:52:54:00:ed:da:b2 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-824789-m02 Clientid:01:52:54:00:ed:da:b2}
	I0410 22:12:21.064903   39465 main.go:141] libmachine: (multinode-824789-m02) DBG | domain multinode-824789-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:ed:da:b2 in network mk-multinode-824789
	I0410 22:12:21.065057   39465 main.go:141] libmachine: (multinode-824789-m02) Calling .GetSSHPort
	I0410 22:12:21.065200   39465 main.go:141] libmachine: (multinode-824789-m02) Calling .GetSSHKeyPath
	I0410 22:12:21.065376   39465 main.go:141] libmachine: (multinode-824789-m02) Calling .GetSSHUsername
	I0410 22:12:21.065568   39465 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18610-5679/.minikube/machines/multinode-824789-m02/id_rsa Username:docker}
	I0410 22:12:21.147637   39465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0410 22:12:21.162334   39465 status.go:257] multinode-824789-m02 status: &{Name:multinode-824789-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0410 22:12:21.162386   39465 status.go:255] checking status of multinode-824789-m03 ...
	I0410 22:12:21.162749   39465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0410 22:12:21.162795   39465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0410 22:12:21.177773   39465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I0410 22:12:21.178192   39465 main.go:141] libmachine: () Calling .GetVersion
	I0410 22:12:21.178766   39465 main.go:141] libmachine: Using API Version  1
	I0410 22:12:21.178787   39465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0410 22:12:21.179167   39465 main.go:141] libmachine: () Calling .GetMachineName
	I0410 22:12:21.179457   39465 main.go:141] libmachine: (multinode-824789-m03) Calling .GetState
	I0410 22:12:21.181154   39465 status.go:330] multinode-824789-m03 host status = "Stopped" (err=<nil>)
	I0410 22:12:21.181169   39465 status.go:343] host is not running, skipping remaining checks
	I0410 22:12:21.181177   39465 status.go:257] multinode-824789-m03 status: &{Name:multinode-824789-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (30.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-824789 node start m03 -v=7 --alsologtostderr: (29.6424197s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.29s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-824789 node delete m03: (1.965257806s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.51s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (168.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-824789 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0410 22:21:54.112557   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:21:59.609626   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-824789 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m47.669001654s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824789 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (168.22s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-824789
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-824789-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-824789-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (74.673404ms)

                                                
                                                
-- stdout --
	* [multinode-824789-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18610
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-824789-m02' is duplicated with machine name 'multinode-824789-m02' in profile 'multinode-824789'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-824789-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-824789-m03 --driver=kvm2  --container-runtime=crio: (45.97885078s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-824789
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-824789: exit status 80 (224.431772ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-824789 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-824789-m03 already exists in multinode-824789-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-824789-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.15s)

                                                
                                    
TestScheduledStopUnix (115.8s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-633630 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-633630 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.103208934s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-633630 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-633630 -n scheduled-stop-633630
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-633630 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-633630 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-633630 -n scheduled-stop-633630
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-633630
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-633630 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-633630
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-633630: exit status 7 (80.698224ms)

                                                
                                                
-- stdout --
	scheduled-stop-633630
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-633630 -n scheduled-stop-633630
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-633630 -n scheduled-stop-633630: exit status 7 (73.571703ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-633630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-633630
--- PASS: TestScheduledStopUnix (115.80s)

                                                
                                    
TestRunningBinaryUpgrade (232.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1522957617 start -p running-upgrade-869202 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0410 22:31:42.657451   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 22:31:54.112585   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:31:59.610522   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1522957617 start -p running-upgrade-869202 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.519933544s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-869202 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-869202 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m43.723967806s)
helpers_test.go:175: Cleaning up "running-upgrade-869202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-869202
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-869202: (1.180750352s)
--- PASS: TestRunningBinaryUpgrade (232.80s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-857710 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-857710 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (93.538258ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-857710] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18610
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
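The MK_USAGE exit above is the expected result: --kubernetes-version and --no-kubernetes are mutually exclusive. A minimal sketch of resolving the conflict by following the hint printed in stderr, reusing the profile name and binary from this run:

	$ out/minikube-linux-amd64 config unset kubernetes-version
	$ out/minikube-linux-amd64 start -p NoKubernetes-857710 --no-kubernetes --driver=kvm2 --container-runtime=crio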

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (95.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-857710 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-857710 --driver=kvm2  --container-runtime=crio: (1m34.998315328s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-857710 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-688825 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-688825 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (923.854087ms)

                                                
                                                
-- stdout --
	* [false-688825] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18610
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0410 22:32:24.936579   48178 out.go:291] Setting OutFile to fd 1 ...
	I0410 22:32:24.936907   48178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:32:24.936920   48178 out.go:304] Setting ErrFile to fd 2...
	I0410 22:32:24.936931   48178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0410 22:32:24.937234   48178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18610-5679/.minikube/bin
	I0410 22:32:24.938014   48178 out.go:298] Setting JSON to false
	I0410 22:32:24.939305   48178 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4487,"bootTime":1712783858,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0410 22:32:24.939388   48178 start.go:139] virtualization: kvm guest
	I0410 22:32:24.941896   48178 out.go:177] * [false-688825] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0410 22:32:24.943355   48178 out.go:177]   - MINIKUBE_LOCATION=18610
	I0410 22:32:24.943363   48178 notify.go:220] Checking for updates...
	I0410 22:32:24.944849   48178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0410 22:32:24.946471   48178 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18610-5679/kubeconfig
	I0410 22:32:24.948059   48178 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18610-5679/.minikube
	I0410 22:32:24.949334   48178 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0410 22:32:24.950642   48178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0410 22:32:24.952612   48178 config.go:182] Loaded profile config "NoKubernetes-857710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:32:24.952800   48178 config.go:182] Loaded profile config "offline-crio-874231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0410 22:32:24.952943   48178 config.go:182] Loaded profile config "running-upgrade-869202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0410 22:32:24.953060   48178 driver.go:392] Setting default libvirt URI to qemu:///system
	I0410 22:32:25.791193   48178 out.go:177] * Using the kvm2 driver based on user configuration
	I0410 22:32:25.792576   48178 start.go:297] selected driver: kvm2
	I0410 22:32:25.792595   48178 start.go:901] validating driver "kvm2" against <nil>
	I0410 22:32:25.792607   48178 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0410 22:32:25.794823   48178 out.go:177] 
	W0410 22:32:25.796289   48178 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0410 22:32:25.797545   48178 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-688825 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-688825" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-688825

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-688825"

                                                
                                                
----------------------- debugLogs end: false-688825 [took: 3.648353823s] --------------------------------
helpers_test.go:175: Cleaning up "false-688825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-688825
--- PASS: TestNetworkPlugins/group/false (4.73s)
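This group passes precisely because the start is rejected: --cni=false is incompatible with the crio runtime, which needs a CNI plugin for pod networking. A hedged sketch of a start that would satisfy that requirement, using minikube's built-in bridge CNI option (not exercised in this run):

	$ out/minikube-linux-amd64 start -p false-688825 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio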

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (43.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-857710 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-857710 --no-kubernetes --driver=kvm2  --container-runtime=crio: (42.05643604s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-857710 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-857710 status -o json: exit status 2 (281.488262ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-857710","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-857710
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-857710: (1.201934618s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (53.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-857710 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-857710 --no-kubernetes --driver=kvm2  --container-runtime=crio: (53.827785674s)
--- PASS: TestNoKubernetes/serial/Start (53.83s)

                                                
                                    
x
+
TestPause/serial/Start (104.5s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-262675 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-262675 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m44.499013076s)
--- PASS: TestPause/serial/Start (104.50s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-857710 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-857710 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.257771ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
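The non-zero exit is what the test expects: systemctl is-active exits non-zero when the queried unit is not running (status 3 in this run), confirming the kubelet stayed down after the --no-kubernetes start. A manual spot-check along the same lines, assuming the profile is still up (without --quiet so the unit state is printed):

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-857710 "sudo systemctl is-active kubelet"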

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (29.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.379278143s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.885528517s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-857710
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-857710: (2.166289127s)
--- PASS: TestNoKubernetes/serial/Stop (2.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (25.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-857710 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-857710 --driver=kvm2  --container-runtime=crio: (25.223454653s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (25.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-857710 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-857710 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.30826ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.38s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (124.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2776406316 start -p stopped-upgrade-546741 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2776406316 start -p stopped-upgrade-546741 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m14.631942918s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2776406316 -p stopped-upgrade-546741 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2776406316 -p stopped-upgrade-546741 stop: (2.1419412s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-546741 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-546741 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.580977526s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (124.36s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-546741
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (132.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-646133 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-646133 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.1: (2m12.747347313s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (132.75s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-646133 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3967b559-8afc-47f7-bd25-9806c23d1222] Pending
helpers_test.go:344: "busybox" [3967b559-8afc-47f7-bd25-9806c23d1222] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3967b559-8afc-47f7-bd25-9806c23d1222] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005986484s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-646133 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)
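The DeployApp step applies testdata/busybox.yaml and polls until a pod labelled integration-test=busybox reports Running, then checks the open-file limit inside it. A roughly equivalent manual check with plain kubectl, assuming the same context and label (kubectl wait is standard kubectl; the timeout value here is illustrative):

	$ kubectl --context no-preload-646133 -n default wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	$ kubectl --context no-preload-646133 exec busybox -- /bin/sh -c "ulimit -n"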

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-646133 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-646133 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (60.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-706500 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0410 22:41:54.112669   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:41:59.609673   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-706500 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m0.287644868s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-706500 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [32495714-5ff3-4dbe-b708-c8b861df7c9b] Pending
helpers_test.go:344: "busybox" [32495714-5ff3-4dbe-b708-c8b861df7c9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [32495714-5ff3-4dbe-b708-c8b861df7c9b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005667875s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-706500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-706500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-706500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-519831 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-519831 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (56.571839336s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.57s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (717.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-646133 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-646133 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.1: (11m57.380514483s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-646133 -n no-preload-646133
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (717.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-862528 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-862528 --alsologtostderr -v=3: (5.457236024s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862528 -n old-k8s-version-862528: exit status 7 (85.914433ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-862528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-519831 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c] Pending
helpers_test.go:344: "busybox" [3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3238c558-cbd6-4f3a-b18c-01cf4b1b9d0c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00509077s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-519831 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-519831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-519831 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (553.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-706500 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-706500 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (9m13.569165858s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-706500 -n embed-certs-706500
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (553.86s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (422.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-519831 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0410 22:46:54.115053   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:46:59.610264   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 22:48:22.658315   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 22:51:54.112461   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
E0410 22:51:59.609809   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
E0410 22:53:17.161264   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-519831 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (7m1.818081366s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (422.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-497448 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-497448 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.1: (59.569158357s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (67.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m7.180452441s)
--- PASS: TestNetworkPlugins/group/auto/Start (67.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-497448 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-497448 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.327330202s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-497448 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-497448 --alsologtostderr -v=3: (10.657013546s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.66s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-497448 -n newest-cni-497448
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-497448 -n newest-cni-497448: exit status 7 (82.243106ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-497448 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (39.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-497448 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-497448 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.1: (39.501527259s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-497448 -n newest-cni-497448
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-688825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-688825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-88b9j" [3852c739-b52f-40cd-a9ea-85ecb0f7d8a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-88b9j" [3852c739-b52f-40cd-a9ea-85ecb0f7d8a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004530438s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)
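The NetCatPod step above deploys a small netcat/dnsutils probe pod and polls until it reports Running. A rough manual equivalent, assuming the auto-688825 context from this run and the app=netcat label used by testdata/netcat-deployment.yaml:

    # deploy (or redeploy) the probe pod, then block until it is Ready
    kubectl --context auto-688825 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-688825 wait --for=condition=Ready pod -l app=netcat -n default --timeout=15m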

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (33.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-688825 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-688825 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.193529275s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-688825 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-688825 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.163583497s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-688825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (33.78s)
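The two "connection timed out" exits above are retries that the test absorbs: the third nslookup (the final Run line, with no Non-zero exit recorded) succeeded, so the case still passes, just slowly. A hedged way to reproduce the same retry loop by hand, using the context and deployment names from this run:

    # retry in-cluster DNS resolution a few times before declaring failure
    for i in 1 2 3; do
      kubectl --context auto-688825 exec deployment/netcat -- nslookup kubernetes.default && break
      sleep 5
    done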

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-497448 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
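VerifyKubernetesImages lists every image loaded into the profile and flags anything outside the expected Kubernetes set (here the kindnetd CNI image). A quick way to inspect the same data by hand, assuming jq is installed (an assumption, not part of the test):

    # dump the image inventory for the profile; jq is used purely for readability
    out/minikube-linux-amd64 -p newest-cni-497448 image list --format=json | jq .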

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-497448 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-497448 -n newest-cni-497448
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-497448 -n newest-cni-497448: exit status 2 (260.006382ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-497448 -n newest-cni-497448
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-497448 -n newest-cni-497448: exit status 2 (267.573538ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-497448 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-497448 -n newest-cni-497448
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-497448 -n newest-cni-497448
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.61s)
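The Pause case runs a full pause/unpause round trip; while the profile is paused, status exits with code 2 (APIServer "Paused", Kubelet "Stopped"), which the test explicitly allows. A sketch of the same cycle, reusing the profile name from this run:

    out/minikube-linux-amd64 pause -p newest-cni-497448 --alsologtostderr -v=1
    # while paused, status exits 2; that is expected, hence the "|| true"
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-497448 -n newest-cni-497448 || true
    out/minikube-linux-amd64 unpause -p newest-cni-497448 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-497448 -n newest-cni-497448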

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (68.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0410 23:09:57.161634   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/functional-130509/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m8.321747941s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
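Localhost and HairPin exercise two different paths from inside the same pod: the first connects to a port on the pod's own loopback, the second connects back through the pod's own Service name, which can land on the originating pod itself (the classic hairpin case). In shell form, with the context and names taken from this run:

    # loopback reachability from inside the pod
    kubectl --context auto-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod reaches itself via its own Service name
    kubectl --context auto-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"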

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (95.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0410 23:10:27.140794   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:27.146059   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:27.156514   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:27.176840   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:27.217140   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:27.297860   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:27.458326   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:27.779126   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:28.419994   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:29.700530   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:32.261627   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:37.382103   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:10:47.622897   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m35.680048166s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vdq77" [8490bdd1-160b-43c7-abfa-fdbff5ead6a1] Running
E0410 23:11:08.103140   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005241719s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
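ControllerPod waits up to 10 minutes for the CNI's own daemon pod, labelled app=kindnet in kube-system, to become healthy before the per-plugin connectivity checks run. Roughly the same wait by hand, using the context from this run:

    kubectl --context kindnet-688825 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m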

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-688825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-688825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jjf6c" [f627abfe-2000-4cdb-9f7a-08e232de8d08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jjf6c" [f627abfe-2000-4cdb-9f7a-08e232de8d08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004684428s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (91.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m31.921281658s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-688825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (81.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0410 23:11:44.624648   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
E0410 23:11:44.629969   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
E0410 23:11:44.640286   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
E0410 23:11:44.660582   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
E0410 23:11:44.700863   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
E0410 23:11:44.781233   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
E0410 23:11:44.941737   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
E0410 23:11:45.262390   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
E0410 23:11:45.903501   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
E0410 23:11:47.184570   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
E0410 23:11:49.063792   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/no-preload-646133/client.crt: no such file or directory
E0410 23:11:49.745369   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m21.173486371s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-519831 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-519831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-519831 --alsologtostderr -v=1: (1.034537233s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
E0410 23:11:59.609638   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/addons-577364/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831: exit status 2 (302.362324ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831: exit status 2 (287.324584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-519831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-519831 -n default-k8s-diff-port-519831
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5lv6l" [408317dc-27c6-4c78-a1e7-ac8258f72d9d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00519552s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (94.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0410 23:12:05.107002   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m34.15146191s)
--- PASS: TestNetworkPlugins/group/flannel/Start (94.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-688825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-688825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k84mp" [76c1ff9c-4aaa-4afa-a899-0807abf8c7dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k84mp" [76c1ff9c-4aaa-4afa-a899-0807abf8c7dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.059640054s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-688825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (67.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-688825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m7.229851806s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-688825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-688825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4b9v8" [0376f56e-f75e-4b41-9eea-2fbaf8c10cf9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4b9v8" [0376f56e-f75e-4b41-9eea-2fbaf8c10cf9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004610921s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-688825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-688825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7f2tb" [a6593c82-f378-4f4b-8d86-f1de563c573b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7f2tb" [a6593c82-f378-4f4b-8d86-f1de563c573b] Running
E0410 23:13:06.548795   13001 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18610-5679/.minikube/profiles/old-k8s-version-862528/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005568885s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-688825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-688825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-v78tj" [09cf629c-c165-45cd-9aa9-d86a1d26816a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006162296s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-688825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-688825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8275r" [67d4c88b-3d84-4def-aeed-fab23373952e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8275r" [67d4c88b-3d84-4def-aeed-fab23373952e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004271408s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-688825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-688825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ng9kw" [005f7566-fdb0-4713-b801-0fc0cdcbc383] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ng9kw" [005f7566-fdb0-4713-b801-0fc0cdcbc383] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003854653s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-688825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-688825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-688825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (39/321)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.30.0-rc.1/binaries 0
25 TestDownloadOnly/v1.30.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
265 TestStartStop/group/disable-driver-mounts 0.17
270 TestNetworkPlugins/group/kubenet 3.25
278 TestNetworkPlugins/group/cilium 6.34
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-676292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-676292
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
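
The cleanup step above (helpers_test.go:178) removes the leftover profile by shelling out to the minikube binary. A hedged sketch of an equivalent helper follows; the function name and error handling are assumptions, and only the `delete -p <profile>` invocation is taken from the log.

    // Hypothetical cleanup helper; the report's helpers_test.go achieves the
    // same effect by running `out/minikube-linux-amd64 delete -p <profile>`.
    package helpers_test

    import (
        "os/exec"
        "testing"
    )

    func cleanupProfile(t *testing.T, minikubeBin, profile string) {
        t.Helper()
        out, err := exec.Command(minikubeBin, "delete", "-p", profile).CombinedOutput()
        if err != nil {
            // Cleanup failures are logged rather than failing the test.
            t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
        }
    }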

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-688825 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-688825" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-688825

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-688825"

                                                
                                                
----------------------- debugLogs end: kubenet-688825 [took: 3.104759846s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-688825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-688825
--- SKIP: TestNetworkPlugins/group/kubenet (3.25s)
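
The long debugLogs dump above is expected noise: the skip fires before any cluster is created, so every kubectl and minikube query runs against a profile that does not exist. A hedged sketch of the kind of runtime gate involved is shown below; it is illustrative only, the real check lives in net_test.go, and the function name maybeSkipKubenet is hypothetical.

    // Illustrative only: the kubenet group is skipped whenever the selected
    // container runtime depends on a CNI plugin, because kubenet does not
    // provide one.
    package net_test

    import "testing"

    func maybeSkipKubenet(t *testing.T, containerRuntime string) {
        t.Helper()
        switch containerRuntime {
        case "crio", "containerd":
            t.Skipf("skipping kubenet: the %s runtime requires CNI", containerRuntime)
        }
    }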

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-688825 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-688825" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-688825

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-688825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688825"

                                                
                                                
----------------------- debugLogs end: cilium-688825 [took: 6.167328842s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-688825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-688825
--- SKIP: TestNetworkPlugins/group/cilium (6.34s)

                                                
                                    